Nov 29 06:29:25 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 06:29:25 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 06:29:25 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:29:25 localhost kernel: BIOS-provided physical RAM map:
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 06:29:25 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
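The BIOS-e820 map above is the firmware's view of physical memory; summing the "usable" ranges gives the RAM the kernel can actually manage. A minimal sketch, assuming the exact BIOS-e820 line format shown above:

    import re

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(lines):
        total = 0
        for line in lines:
            m = E820.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

For the three usable ranges above this yields 8,589,257,728 bytes, just under 8 GiB, consistent with the Memory: totals the kernel reports later in this boot.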
Nov 29 06:29:25 localhost kernel: NX (Execute Disable) protection: active
Nov 29 06:29:25 localhost kernel: APIC: Static calls initialized
Nov 29 06:29:25 localhost kernel: SMBIOS 2.8 present.
Nov 29 06:29:25 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 06:29:25 localhost kernel: Hypervisor detected: KVM
Nov 29 06:29:25 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 06:29:25 localhost kernel: kvm-clock: using sched offset of 6206054738 cycles
Nov 29 06:29:25 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 06:29:25 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 29 06:29:25 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 29 06:29:25 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 29 06:29:25 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 06:29:25 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 06:29:25 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 06:29:25 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 06:29:25 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 06:29:25 localhost kernel: Using GB pages for direct mapping
Nov 29 06:29:25 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 06:29:25 localhost kernel: ACPI: Early table checksum verification disabled
Nov 29 06:29:25 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 06:29:25 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:29:25 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:29:25 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:29:25 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 06:29:25 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:29:25 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:29:25 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 06:29:25 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 06:29:25 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 06:29:25 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 06:29:25 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 06:29:25 localhost kernel: No NUMA configuration found
Nov 29 06:29:25 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 06:29:25 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 06:29:25 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
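This reservation follows the range:size syntax from the command line (crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M): the first range containing the system RAM size wins. With roughly 8 GiB of RAM the 2G-64G:256M entry matches, and the reserved window 0xaf000000-0xbf000000 is exactly 256 MiB. A sketch of the selection rule, assuming the kernel's half-open range matching:

    SUFFIX = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

    def size(s):
        return int(s[:-1]) * SUFFIX[s[-1]]

    def crashkernel_bytes(spec, ram):
        for entry in spec.split(","):          # first matching range wins
            rng, sz = entry.split(":")
            lo, _, hi = rng.partition("-")     # "64G-" means no upper bound
            if size(lo) <= ram and (not hi or ram < size(hi)):
                return size(sz)
        return 0

    assert crashkernel_bytes("1G-2G:192M,2G-64G:256M,64G-:512M", 8 << 30) == 256 << 20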
Nov 29 06:29:25 localhost kernel: Zone ranges:
Nov 29 06:29:25 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 06:29:25 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 06:29:25 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:29:25 localhost kernel:   Device   empty
Nov 29 06:29:25 localhost kernel: Movable zone start for each node
Nov 29 06:29:25 localhost kernel: Early memory node ranges
Nov 29 06:29:25 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 06:29:25 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 06:29:25 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:29:25 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 06:29:25 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 06:29:25 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 06:29:25 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 06:29:25 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 06:29:25 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 06:29:25 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 06:29:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 06:29:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 06:29:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 06:29:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 06:29:25 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 06:29:25 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 06:29:25 localhost kernel: TSC deadline timer available
Nov 29 06:29:25 localhost kernel: CPU topo: Max. logical packages:   8
Nov 29 06:29:25 localhost kernel: CPU topo: Max. logical dies:       8
Nov 29 06:29:25 localhost kernel: CPU topo: Max. dies per package:   1
Nov 29 06:29:25 localhost kernel: CPU topo: Max. threads per core:   1
Nov 29 06:29:25 localhost kernel: CPU topo: Num. cores per package:     1
Nov 29 06:29:25 localhost kernel: CPU topo: Num. threads per package:   1
Nov 29 06:29:25 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 06:29:25 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 06:29:25 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 06:29:25 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 06:29:25 localhost kernel: Booting paravirtualized kernel on KVM
Nov 29 06:29:25 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 06:29:25 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 06:29:25 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 06:29:25 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 29 06:29:25 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 29 06:29:25 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 06:29:25 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:29:25 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
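The kernel warns here because BOOT_IMAGE=... is not a parameter it recognizes; such leftovers are handed to PID 1 instead. A token containing an equals sign becomes an environment variable, which is why BOOT_IMAGE appears under "with environment" at the init handoff near the end of the kernel messages. A rough sketch of that routing rule (ignoring options the kernel itself consumes, such as root= and console=):

    def split_for_init(leftover_params):
        # unrecognized "name=value" tokens -> init's environment,
        # bare unrecognized words -> init's argument list
        argv, envp = ["/init"], []
        for tok in leftover_params:
            (envp if "=" in tok else argv).append(tok)
        return argv, envp

    argv, envp = split_for_init(
        ["BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64"])
    # envp == ["BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64"]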
Nov 29 06:29:25 localhost kernel: random: crng init done
Nov 29 06:29:25 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 06:29:25 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 06:29:25 localhost kernel: Fallback order for Node 0: 0 
Nov 29 06:29:25 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 06:29:25 localhost kernel: Policy zone: Normal
Nov 29 06:29:25 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 06:29:25 localhost kernel: software IO TLB: area num 8.
Nov 29 06:29:25 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 06:29:25 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 06:29:25 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 06:29:25 localhost kernel: Dynamic Preempt: voluntary
Nov 29 06:29:25 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 06:29:25 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 29 06:29:25 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 06:29:25 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 29 06:29:25 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 29 06:29:25 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 29 06:29:25 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 06:29:25 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 06:29:25 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:29:25 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:29:25 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:29:25 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 06:29:25 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 06:29:25 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 06:29:25 localhost kernel: Console: colour VGA+ 80x25
Nov 29 06:29:25 localhost kernel: printk: console [ttyS0] enabled
Nov 29 06:29:25 localhost kernel: ACPI: Core revision 20230331
Nov 29 06:29:25 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 06:29:25 localhost kernel: x2apic enabled
Nov 29 06:29:25 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 06:29:25 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 06:29:25 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 06:29:25 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 06:29:25 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 06:29:25 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 06:29:25 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 06:29:25 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 06:29:25 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 06:29:25 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 06:29:25 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 06:29:25 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 06:29:25 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 06:29:25 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 06:29:25 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 06:29:25 localhost kernel: x86/bugs: return thunk changed
Nov 29 06:29:25 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 06:29:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 06:29:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 06:29:25 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 06:29:25 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 06:29:25 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 06:29:25 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 29 06:29:25 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 29 06:29:25 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 06:29:25 localhost kernel: landlock: Up and running.
Nov 29 06:29:25 localhost kernel: Yama: becoming mindful.
Nov 29 06:29:25 localhost kernel: SELinux:  Initializing.
Nov 29 06:29:25 localhost kernel: LSM support for eBPF active
Nov 29 06:29:25 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:29:25 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:29:25 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 06:29:25 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 06:29:25 localhost kernel: ... version:                0
Nov 29 06:29:25 localhost kernel: ... bit width:              48
Nov 29 06:29:25 localhost kernel: ... generic registers:      6
Nov 29 06:29:25 localhost kernel: ... value mask:             0000ffffffffffff
Nov 29 06:29:25 localhost kernel: ... max period:             00007fffffffffff
Nov 29 06:29:25 localhost kernel: ... fixed-purpose events:   0
Nov 29 06:29:25 localhost kernel: ... event mask:             000000000000003f
Nov 29 06:29:25 localhost kernel: signal: max sigframe size: 1776
Nov 29 06:29:25 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 29 06:29:25 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 29 06:29:25 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 29 06:29:25 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 29 06:29:25 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 06:29:25 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 06:29:25 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 06:29:25 localhost kernel: node 0 deferred pages initialised in 22ms
Nov 29 06:29:25 localhost kernel: Memory: 7765952K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 29 06:29:25 localhost kernel: devtmpfs: initialized
Nov 29 06:29:25 localhost kernel: x86/mm: Memory block size: 128MB
Nov 29 06:29:25 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 06:29:25 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 06:29:25 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 06:29:25 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 06:29:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 06:29:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 06:29:25 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 06:29:25 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 29 06:29:25 localhost kernel: audit: type=2000 audit(1764397762.547:1): state=initialized audit_enabled=0 res=1
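Audit records carry their own epoch timestamps in the form audit(seconds.millis:serial). Decoding the one above gives 2025-11-29T06:29:22.547 UTC, which lines up with the rtc_cmos message further down that sets the system clock to 2025-11-29T06:29:24 UTC (1764397764). A small decoder:

    from datetime import datetime, timezone

    def audit_time(stamp):
        # e.g. "1764397762.547:1" -> (UTC datetime, serial number)
        epoch, _, serial = stamp.partition(":")
        return datetime.fromtimestamp(float(epoch), tz=timezone.utc), int(serial)

    when, serial = audit_time("1764397762.547:1")
    print(when.isoformat(), serial)  # 2025-11-29T06:29:22.547000+00:00 1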
Nov 29 06:29:25 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 06:29:25 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 06:29:25 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 06:29:25 localhost kernel: cpuidle: using governor menu
Nov 29 06:29:25 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 06:29:25 localhost kernel: PCI: Using configuration type 1 for base access
Nov 29 06:29:25 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 29 06:29:25 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 06:29:25 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 06:29:25 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 06:29:25 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 06:29:25 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 06:29:25 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:29:25 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 06:29:25 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 29 06:29:25 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 29 06:29:25 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 06:29:25 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 06:29:25 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 06:29:25 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 06:29:25 localhost kernel: ACPI: Interpreter enabled
Nov 29 06:29:25 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 06:29:25 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 06:29:25 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 06:29:25 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 06:29:25 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 06:29:25 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 06:29:25 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [3] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [4] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [5] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [6] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [7] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [8] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [9] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [10] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [11] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [12] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [13] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [14] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [15] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [16] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [17] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [18] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [19] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [20] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [21] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [22] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [23] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [24] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [25] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [26] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [27] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [28] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [29] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [30] registered
Nov 29 06:29:25 localhost kernel: acpiphp: Slot [31] registered
Nov 29 06:29:25 localhost kernel: PCI host bridge to bus 0000:00
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 06:29:25 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 06:29:25 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 06:29:25 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:29:25 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 06:29:25 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 06:29:25 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 06:29:25 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 06:29:25 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 06:29:25 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 06:29:25 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 06:29:25 localhost kernel: iommu: Default domain type: Translated
Nov 29 06:29:25 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 06:29:25 localhost kernel: SCSI subsystem initialized
Nov 29 06:29:25 localhost kernel: ACPI: bus type USB registered
Nov 29 06:29:25 localhost kernel: usbcore: registered new interface driver usbfs
Nov 29 06:29:25 localhost kernel: usbcore: registered new interface driver hub
Nov 29 06:29:25 localhost kernel: usbcore: registered new device driver usb
Nov 29 06:29:25 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 06:29:25 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 06:29:25 localhost kernel: PTP clock support registered
Nov 29 06:29:25 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 29 06:29:25 localhost kernel: NetLabel: Initializing
Nov 29 06:29:25 localhost kernel: NetLabel:  domain hash size = 128
Nov 29 06:29:25 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 06:29:25 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 06:29:25 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 29 06:29:25 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 29 06:29:25 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 29 06:29:25 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 06:29:25 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 06:29:25 localhost kernel: vgaarb: loaded
Nov 29 06:29:25 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 06:29:25 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 06:29:25 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 06:29:25 localhost kernel: pnp: PnP ACPI init
Nov 29 06:29:25 localhost kernel: pnp 00:03: [dma 2]
Nov 29 06:29:25 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 29 06:29:25 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 06:29:25 localhost kernel: NET: Registered PF_INET protocol family
Nov 29 06:29:25 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 06:29:25 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 06:29:25 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 06:29:25 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 06:29:25 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 06:29:25 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 06:29:25 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 06:29:25 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 06:29:25 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
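For the hash tables above, the printed order appears to be floor(log2(bytes / page_size)) with 4 KiB pages, and bytes is the entry count times a per-bucket size that differs per table (8 bytes for TCP established, 16 for TCP bind, 24 for MPTCP tokens here). A quick consistency check under that assumption:

    import math

    def order(nbytes, page=4096):
        # floor(log2(pages)), matching the kernel's ilog2() rounding
        return int(math.log2(nbytes // page))

    assert order(524288) == 7    # TCP established: 65536 entries * 8 B
    assert order(1048576) == 8   # TCP bind: 65536 entries * 16 B
    assert order(196608) == 5    # MPTCP token: 8192 entries * 24 B, 48 pages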
Nov 29 06:29:25 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 06:29:25 localhost kernel: NET: Registered PF_XDP protocol family
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 06:29:25 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 06:29:25 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 06:29:25 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 06:29:25 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77778 usecs
Nov 29 06:29:25 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 29 06:29:25 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 06:29:25 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 06:29:25 localhost kernel: ACPI: bus type thunderbolt registered
Nov 29 06:29:25 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 29 06:29:25 localhost kernel: Initialise system trusted keyrings
Nov 29 06:29:25 localhost kernel: Key type blacklist registered
Nov 29 06:29:25 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 06:29:25 localhost kernel: zbud: loaded
Nov 29 06:29:25 localhost kernel: integrity: Platform Keyring initialized
Nov 29 06:29:25 localhost kernel: integrity: Machine keyring initialized
Nov 29 06:29:25 localhost kernel: Freeing initrd memory: 85868K
Nov 29 06:29:25 localhost kernel: NET: Registered PF_ALG protocol family
Nov 29 06:29:25 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 29 06:29:25 localhost kernel: Key type asymmetric registered
Nov 29 06:29:25 localhost kernel: Asymmetric key parser 'x509' registered
Nov 29 06:29:25 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 06:29:25 localhost kernel: io scheduler mq-deadline registered
Nov 29 06:29:25 localhost kernel: io scheduler kyber registered
Nov 29 06:29:25 localhost kernel: io scheduler bfq registered
Nov 29 06:29:25 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 06:29:25 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 06:29:25 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 06:29:25 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 29 06:29:25 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 06:29:25 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 06:29:25 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 06:29:25 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 06:29:25 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 06:29:25 localhost kernel: Non-volatile memory driver v1.3
Nov 29 06:29:25 localhost kernel: rdac: device handler registered
Nov 29 06:29:25 localhost kernel: hp_sw: device handler registered
Nov 29 06:29:25 localhost kernel: emc: device handler registered
Nov 29 06:29:25 localhost kernel: alua: device handler registered
Nov 29 06:29:25 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 06:29:25 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 06:29:25 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 06:29:25 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 06:29:25 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 06:29:25 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 06:29:25 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 29 06:29:25 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 06:29:25 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 06:29:25 localhost kernel: hub 1-0:1.0: USB hub found
Nov 29 06:29:25 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 29 06:29:25 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 06:29:25 localhost kernel: usbserial: USB Serial support registered for generic
Nov 29 06:29:25 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 06:29:25 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 06:29:25 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 06:29:25 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 06:29:25 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 06:29:25 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 06:29:25 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 06:29:25 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:29:24 UTC (1764397764)
Nov 29 06:29:25 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 06:29:25 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 06:29:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 06:29:25 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 06:29:25 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 06:29:25 localhost kernel: usbcore: registered new interface driver usbhid
Nov 29 06:29:25 localhost kernel: usbhid: USB HID core driver
Nov 29 06:29:25 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 29 06:29:25 localhost kernel: Initializing XFRM netlink socket
Nov 29 06:29:25 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 29 06:29:25 localhost kernel: Segment Routing with IPv6
Nov 29 06:29:25 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 29 06:29:25 localhost kernel: mpls_gso: MPLS GSO support
Nov 29 06:29:25 localhost kernel: IPI shorthand broadcast: enabled
Nov 29 06:29:25 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 06:29:25 localhost kernel: AES CTR mode by8 optimization enabled
Nov 29 06:29:25 localhost kernel: sched_clock: Marking stable (3819005072, 151630058)->(4274552943, -303917813)
Nov 29 06:29:25 localhost kernel: registered taskstats version 1
Nov 29 06:29:25 localhost kernel: Loading compiled-in X.509 certificates
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 06:29:25 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:29:25 localhost kernel: page_owner is disabled
Nov 29 06:29:25 localhost kernel: Key type .fscrypt registered
Nov 29 06:29:25 localhost kernel: Key type fscrypt-provisioning registered
Nov 29 06:29:25 localhost kernel: Key type big_key registered
Nov 29 06:29:25 localhost kernel: Key type encrypted registered
Nov 29 06:29:25 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 06:29:25 localhost kernel: Loading compiled-in module X.509 certificates
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:29:25 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 29 06:29:25 localhost kernel: ima: No architecture policies found
Nov 29 06:29:25 localhost kernel: evm: Initialising EVM extended attributes:
Nov 29 06:29:25 localhost kernel: evm: security.selinux
Nov 29 06:29:25 localhost kernel: evm: security.SMACK64 (disabled)
Nov 29 06:29:25 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 06:29:25 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 06:29:25 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 06:29:25 localhost kernel: evm: security.apparmor (disabled)
Nov 29 06:29:25 localhost kernel: evm: security.ima
Nov 29 06:29:25 localhost kernel: evm: security.capability
Nov 29 06:29:25 localhost kernel: evm: HMAC attrs: 0x1
Nov 29 06:29:25 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 06:29:25 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 06:29:25 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 06:29:25 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 06:29:25 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 29 06:29:25 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 06:29:25 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 06:29:25 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 06:29:25 localhost kernel: Running certificate verification RSA selftest
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 06:29:25 localhost kernel: Running certificate verification ECDSA selftest
Nov 29 06:29:25 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 06:29:25 localhost kernel: clk: Disabling unused clocks
Nov 29 06:29:25 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 29 06:29:25 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 06:29:25 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 29 06:29:25 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 06:29:25 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 06:29:25 localhost kernel: Run /init as init process
Nov 29 06:29:25 localhost kernel:   with arguments:
Nov 29 06:29:25 localhost kernel:     /init
Nov 29 06:29:25 localhost kernel:   with environment:
Nov 29 06:29:25 localhost kernel:     HOME=/
Nov 29 06:29:25 localhost kernel:     TERM=linux
Nov 29 06:29:25 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 29 06:29:25 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:29:25 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:29:25 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:29:25 localhost systemd[1]: Running in initrd.
Nov 29 06:29:25 localhost systemd[1]: No hostname configured, using default hostname.
Nov 29 06:29:25 localhost systemd[1]: Hostname set to <localhost>.
Nov 29 06:29:25 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 29 06:29:25 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 29 06:29:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:29:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:29:25 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 29 06:29:25 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:29:25 localhost systemd[1]: Reached target Path Units.
Nov 29 06:29:25 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:29:25 localhost systemd[1]: Reached target Swaps.
Nov 29 06:29:25 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:29:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:29:25 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 29 06:29:25 localhost systemd[1]: Listening on Journal Socket.
Nov 29 06:29:25 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:29:25 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:29:25 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:29:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:29:25 localhost systemd[1]: Starting Journal Service...
Nov 29 06:29:25 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:29:25 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:29:25 localhost systemd[1]: Starting Create System Users...
Nov 29 06:29:25 localhost systemd[1]: Starting Setup Virtual Console...
Nov 29 06:29:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:29:25 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:29:25 localhost systemd[1]: Finished Create System Users.
Nov 29 06:29:25 localhost systemd-journald[308]: Journal started
Nov 29 06:29:25 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/a28c55e720034883bda8258835775761) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:29:25 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Nov 29 06:29:25 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Nov 29 06:29:25 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 06:29:25 localhost systemd[1]: Started Journal Service.
Nov 29 06:29:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:29:25 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:29:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:29:25 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:29:25 localhost systemd[1]: Finished Setup Virtual Console.
Nov 29 06:29:25 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 06:29:25 localhost systemd[1]: Starting dracut cmdline hook...
Nov 29 06:29:25 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 06:29:25 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:29:25 localhost systemd[1]: Finished dracut cmdline hook.
Nov 29 06:29:25 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 29 06:29:25 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 06:29:25 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 29 06:29:25 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 06:29:25 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 29 06:29:25 localhost kernel: RPC: Registered udp transport module.
Nov 29 06:29:25 localhost kernel: RPC: Registered tcp transport module.
Nov 29 06:29:25 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 06:29:25 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 06:29:25 localhost rpc.statd[444]: Version 2.5.4 starting
Nov 29 06:29:25 localhost rpc.statd[444]: Initializing NSM state
Nov 29 06:29:25 localhost rpc.idmapd[449]: Setting log level to 0
Nov 29 06:29:25 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 29 06:29:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:29:26 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:29:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:29:26 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 29 06:29:26 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 29 06:29:26 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:29:26 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 29 06:29:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:29:26 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:29:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:29:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:29:26 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:29:26 localhost systemd[1]: Reached target Network.
Nov 29 06:29:26 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:29:26 localhost systemd[1]: Starting dracut initqueue hook...
Nov 29 06:29:26 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 29 06:29:26 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 29 06:29:26 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:29:26 localhost systemd[1]: Reached target Basic System.
Nov 29 06:29:26 localhost kernel: libata version 3.00 loaded.
Nov 29 06:29:26 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 06:29:26 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 29 06:29:26 localhost systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:29:26 localhost kernel: scsi host0: ata_piix
Nov 29 06:29:26 localhost kernel: scsi host1: ata_piix
Nov 29 06:29:26 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 06:29:26 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 06:29:26 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 06:29:26 localhost kernel:  vda: vda1
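The dual capacity figures for vda come straight from the sector count: 167,772,160 logical blocks of 512 bytes is 85.9 GB in decimal units and exactly 80 GiB in binary units.

    blocks, sector = 167772160, 512
    nbytes = blocks * sector
    print(nbytes / 1e9, nbytes / 2**30)  # 85.89934592 GB, 80.0 GiB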
Nov 29 06:29:26 localhost kernel: ata1: found unknown device (class 0)
Nov 29 06:29:26 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 06:29:26 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 06:29:26 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 06:29:26 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 06:29:26 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 06:29:26 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:29:26 localhost systemd[1]: Reached target Initrd Root Device.
Nov 29 06:29:27 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 29 06:29:27 localhost systemd[1]: Finished dracut initqueue hook.
Nov 29 06:29:27 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:29:27 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 06:29:27 localhost systemd[1]: Reached target Remote File Systems.
Nov 29 06:29:27 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 29 06:29:27 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 29 06:29:27 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 06:29:27 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 06:29:27 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:29:27 localhost systemd[1]: Mounting /sysroot...
Nov 29 06:29:27 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 06:29:27 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 06:29:28 localhost kernel: XFS (vda1): Ending clean mount
Nov 29 06:29:28 localhost systemd[1]: Mounted /sysroot.
Nov 29 06:29:28 localhost systemd[1]: Reached target Initrd Root File System.
Nov 29 06:29:28 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 06:29:28 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 06:29:28 localhost systemd[1]: Reached target Initrd File Systems.
Nov 29 06:29:28 localhost systemd[1]: Reached target Initrd Default Target.
Nov 29 06:29:28 localhost systemd[1]: Starting dracut mount hook...
Nov 29 06:29:28 localhost systemd[1]: Finished dracut mount hook.
Nov 29 06:29:28 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 06:29:28 localhost rpc.idmapd[449]: exiting on signal 15
Nov 29 06:29:28 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 06:29:28 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 06:29:28 localhost systemd[1]: Stopped target Network.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Timer Units.
Nov 29 06:29:28 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 06:29:28 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Basic System.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Path Units.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Remote File Systems.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Slice Units.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Socket Units.
Nov 29 06:29:28 localhost systemd[1]: Stopped target System Initialization.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Local File Systems.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Swaps.
Nov 29 06:29:28 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut mount hook.
Nov 29 06:29:28 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 29 06:29:28 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 06:29:28 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 06:29:28 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 29 06:29:28 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 29 06:29:28 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 06:29:28 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 06:29:28 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 06:29:28 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 06:29:28 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 29 06:29:28 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 06:29:28 localhost systemd[1]: systemd-udevd.service: Consumed 1.920s CPU time.
Nov 29 06:29:28 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 06:29:28 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Closed udev Control Socket.
Nov 29 06:29:28 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Closed udev Kernel Socket.
Nov 29 06:29:28 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 29 06:29:28 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 29 06:29:28 localhost systemd[1]: Starting Cleanup udev Database...
Nov 29 06:29:28 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 06:29:28 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 06:29:28 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Stopped Create System Users.
Nov 29 06:29:28 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 06:29:28 localhost systemd[1]: Finished Cleanup udev Database.
Nov 29 06:29:28 localhost systemd[1]: Reached target Switch Root.
Nov 29 06:29:28 localhost systemd[1]: Starting Switch Root...
Nov 29 06:29:28 localhost systemd[1]: Switching root.
Nov 29 06:29:28 localhost systemd-journald[308]: Journal stopped
Nov 29 06:29:29 localhost systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Nov 29 06:29:29 localhost kernel: audit: type=1404 audit(1764397768.436:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability open_perms=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:29:29 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:29:29 localhost kernel: audit: type=1403 audit(1764397768.587:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 06:29:29 localhost systemd[1]: Successfully loaded SELinux policy in 154.762ms.
Nov 29 06:29:29 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.614ms.
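The two audit records above (type=1404, type=1403) mark the jump to enforcing mode and the policy load that followed. A quick way to confirm the same state on the running system, assuming the stock policycoreutils tools are present:

    getenforce   # prints Enforcing / Permissive / Disabled
    sestatus     # loaded policy name, runtime mode vs. configured mode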
Nov 29 06:29:29 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:29:29 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:29:29 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:29:29 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:29:29 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Stopped Switch Root.
Nov 29 06:29:29 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 06:29:29 localhost systemd[1]: Created slice Slice /system/getty.
Nov 29 06:29:29 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 29 06:29:29 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 29 06:29:29 localhost systemd[1]: Created slice User and Session Slice.
Nov 29 06:29:29 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:29:29 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 29 06:29:29 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 06:29:29 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:29:29 localhost systemd[1]: Stopped target Switch Root.
Nov 29 06:29:29 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 29 06:29:29 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 29 06:29:29 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 29 06:29:29 localhost systemd[1]: Reached target Path Units.
Nov 29 06:29:29 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 29 06:29:29 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:29:29 localhost systemd[1]: Reached target Swaps.
Nov 29 06:29:29 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 29 06:29:29 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 29 06:29:29 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 29 06:29:29 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 29 06:29:29 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 29 06:29:29 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:29:29 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:29:29 localhost systemd[1]: Mounting Huge Pages File System...
Nov 29 06:29:29 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 29 06:29:29 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 29 06:29:29 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 29 06:29:29 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:29:29 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:29:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:29:29 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 29 06:29:29 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 29 06:29:29 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 29 06:29:29 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 06:29:29 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 29 06:29:29 localhost systemd[1]: Stopped Journal Service.
Nov 29 06:29:29 localhost systemd[1]: Starting Journal Service...
Nov 29 06:29:29 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:29:29 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 29 06:29:29 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:29 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 29 06:29:29 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 06:29:29 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:29:29 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:29:29 localhost kernel: fuse: init (API version 7.37)
Nov 29 06:29:29 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
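The kernel's 2038 warning means this root XFS was created without the bigtime feature, so its inode timestamps stop at the 32-bit epoch limit. Whether a given filesystem has the feature can be read from its superblock, assuming xfsprogs is installed:

    xfs_info / | grep -o 'bigtime=[01]'   # 1 = timestamps valid past 2038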
Nov 29 06:29:29 localhost systemd[1]: Mounted Huge Pages File System.
Nov 29 06:29:29 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 06:29:29 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 29 06:29:29 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 29 06:29:29 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:29:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:29:29 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 06:29:29 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 29 06:29:29 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 06:29:29 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 06:29:29 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 06:29:29 localhost systemd-journald[676]: Journal started
Nov 29 06:29:29 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:29:28 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 29 06:29:28 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 06:29:29 localhost kernel: ACPI: bus type drm_connector registered
Nov 29 06:29:29 localhost systemd[1]: Mounting FUSE Control File System...
Nov 29 06:29:29 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:29:29 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 29 06:29:29 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 06:29:29 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 06:29:29 localhost systemd[1]: Starting Create System Users...
Nov 29 06:29:29 localhost systemd[1]: Started Journal Service.
Nov 29 06:29:29 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 06:29:29 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 29 06:29:29 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:29:29 localhost systemd[1]: Mounted FUSE Control File System.
Nov 29 06:29:29 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 06:29:29 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:29:29 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 06:29:29 localhost systemd[1]: Finished Create System Users.
Nov 29 06:29:29 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:29:29 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:29:29 localhost systemd-journald[676]: Received client request to flush runtime journal.
Nov 29 06:29:29 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
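At this point the 8.0M runtime journal in /run/log/journal has been copied to persistent storage. Two journalctl invocations that verify the result:

    journalctl --disk-usage             # space used by active and archived journals
    journalctl -b -u systemd-journald   # this boot's messages from the journal daemon itself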
Nov 29 06:29:29 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:29:29 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:29:29 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 06:29:29 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:29:29 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 06:29:29 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 06:29:29 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 06:29:29 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 06:29:29 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 06:29:29 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 06:29:29 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:29:29 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Nov 29 06:29:29 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 06:29:29 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:29:29 localhost systemd[1]: Starting Security Auditing Service...
Nov 29 06:29:29 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 06:29:29 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 06:29:29 localhost systemd[1]: Starting RPC Bind...
Nov 29 06:29:29 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 06:29:29 localhost systemd[1]: Started RPC Bind.
Nov 29 06:29:29 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 06:29:29 localhost augenrules[705]: /sbin/augenrules: No change
Nov 29 06:29:29 localhost augenrules[720]: No rules
Nov 29 06:29:29 localhost augenrules[720]: enabled 1
Nov 29 06:29:29 localhost augenrules[720]: failure 1
Nov 29 06:29:29 localhost augenrules[720]: pid 699
Nov 29 06:29:29 localhost augenrules[720]: rate_limit 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_limit 8192
Nov 29 06:29:29 localhost augenrules[720]: lost 0
Nov 29 06:29:29 localhost augenrules[720]: backlog 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 29 06:29:29 localhost augenrules[720]: enabled 1
Nov 29 06:29:29 localhost augenrules[720]: failure 1
Nov 29 06:29:29 localhost augenrules[720]: pid 699
Nov 29 06:29:29 localhost augenrules[720]: rate_limit 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_limit 8192
Nov 29 06:29:29 localhost augenrules[720]: lost 0
Nov 29 06:29:29 localhost augenrules[720]: backlog 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 29 06:29:29 localhost augenrules[720]: enabled 1
Nov 29 06:29:29 localhost augenrules[720]: failure 1
Nov 29 06:29:29 localhost augenrules[720]: pid 699
Nov 29 06:29:29 localhost augenrules[720]: rate_limit 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_limit 8192
Nov 29 06:29:29 localhost augenrules[720]: lost 0
Nov 29 06:29:29 localhost augenrules[720]: backlog 0
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 06:29:29 localhost augenrules[720]: backlog_wait_time_actual 0
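The block augenrules prints is the kernel audit status; it appears several times because the status is re-reported after each rules pass. The same snapshot can be taken at any time with auditctl, which ships in the same audit package:

    auditctl -s   # enabled/failure/pid/backlog fields, matching the lines above
    auditctl -l   # loaded rules; "No rules" here agrees with the log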
Nov 29 06:29:29 localhost systemd[1]: Started Security Auditing Service.
Nov 29 06:29:29 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 06:29:29 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 06:29:29 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 06:29:30 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 29 06:29:30 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:29:30 localhost systemd[1]: Starting Update is Completed...
Nov 29 06:29:30 localhost systemd[1]: Finished Update is Completed.
Nov 29 06:29:30 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:29:30 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:29:30 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:29:30 localhost systemd[1]: Started dnf makecache --timer.
Nov 29 06:29:30 localhost systemd[1]: Started Daily rotation of log files.
Nov 29 06:29:30 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 06:29:30 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:29:30 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:29:30 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 06:29:30 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:29:30 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 29 06:29:30 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:30 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 06:29:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:29:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:29:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:29:30 localhost systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:29:30 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 29 06:29:30 localhost systemd[1]: Reached target Basic System.
Nov 29 06:29:30 localhost dbus-broker-lau[747]: Ready
Nov 29 06:29:30 localhost systemd[1]: Starting NTP client/server...
Nov 29 06:29:30 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 06:29:30 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 06:29:30 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 06:29:30 localhost systemd[1]: Started irqbalance daemon.
Nov 29 06:29:30 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 06:29:30 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:30 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:30 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:30 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 29 06:29:30 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 06:29:30 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 29 06:29:30 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 06:29:30 localhost systemd[1]: Starting User Login Management...
Nov 29 06:29:30 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 06:29:30 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 06:29:30 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 06:29:30 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 06:29:30 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 06:29:30 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 06:29:30 localhost chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 06:29:30 localhost systemd[1]: Started NTP client/server.
Nov 29 06:29:30 localhost chronyd[792]: Loaded 0 symmetric keys
Nov 29 06:29:30 localhost systemd-logind[782]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 06:29:30 localhost chronyd[792]: Using right/UTC timezone to obtain leap second data
Nov 29 06:29:30 localhost systemd-logind[782]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 06:29:30 localhost chronyd[792]: Loaded seccomp filter (level 2)
Nov 29 06:29:30 localhost systemd-logind[782]: New seat seat0.
Nov 29 06:29:30 localhost systemd[1]: Started User Login Management.
Nov 29 06:29:30 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 06:29:30 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 06:29:30 localhost kernel: kvm_amd: TSC scaling supported
Nov 29 06:29:30 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 29 06:29:30 localhost kernel: kvm_amd: Nested Paging enabled
Nov 29 06:29:30 localhost kernel: kvm_amd: LBR virtualization supported
Nov 29 06:29:30 localhost kernel: Console: switching to colour dummy device 80x25
Nov 29 06:29:30 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 06:29:30 localhost kernel: [drm] features: -context_init
Nov 29 06:29:30 localhost kernel: [drm] number of scanouts: 1
Nov 29 06:29:30 localhost kernel: [drm] number of cap sets: 0
Nov 29 06:29:30 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 06:29:30 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 06:29:30 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 29 06:29:30 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 06:29:30 localhost iptables.init[776]: iptables: Applying firewall rules: [  OK  ]
Nov 29 06:29:30 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 06:29:30 localhost cloud-init[836]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:29:30 +0000. Up 10.16 seconds.
Nov 29 06:29:31 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 29 06:29:31 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 29 06:29:31 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpsaug3m06.mount: Deactivated successfully.
Nov 29 06:29:31 localhost systemd[1]: Starting Hostname Service...
Nov 29 06:29:31 localhost systemd[1]: Started Hostname Service.
Nov 29 06:29:31 np0005539583.novalocal systemd-hostnamed[850]: Hostname set to <np0005539583.novalocal> (static)
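hostnamed has applied the static hostname handed over by cloud-init's local stage; hostnamectl reads back the same D-Bus properties:

    hostnamectl status   # static hostname, machine ID, chassis, virtualization (kvm)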
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Reached target Preparation for Network.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5645] NetworkManager (version 1.54.1-1.el9) is starting... (boot:558c485d-5d9e-4d57-8633-c8b29a02c676)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5651] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5723] manager[0x56366c6d7080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5768] hostname: hostname: using hostnamed
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5768] hostname: static hostname changed from (none) to "np0005539583.novalocal"
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.5773] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6025] manager[0x56366c6d7080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6026] manager[0x56366c6d7080]: rfkill: WWAN hardware radio set enabled
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6067] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6067] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6068] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6068] manager: Networking is enabled by state file
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6070] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6079] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6097] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6177] dhcp: init: Using DHCP client 'internal'
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6180] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6195] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6203] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6211] device (lo): Activation: starting connection 'lo' (aa60ea5c-c713-40ed-b887-294ab3e1a707)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6222] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6226] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6261] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6266] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6270] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6273] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6275] device (eth0): carrier: link connected
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6280] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6287] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6306] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6312] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6313] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Started Network Manager.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6317] manager: NetworkManager state is now CONNECTING
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6319] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6328] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6333] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Reached target Network.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6372] dhcp4 (eth0): state changed new lease, address=38.102.83.203
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6382] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6405] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6482] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6484] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6486] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6495] device (lo): Activation: successful, device activated.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6502] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6506] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6509] device (eth0): Activation: successful, device activated.
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6515] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:29:31 np0005539583.novalocal NetworkManager[854]: <info>  [1764397771.6519] manager: startup complete
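eth0 was activated with NetworkManager's internal DHCP client, and 'System eth0' is now the IPv4 default for routing and DNS. The resulting state can be read back with nmcli:

    nmcli general status     # should report connected (global)
    nmcli device show eth0   # addresses, gateway, DNS from the DHCP lease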
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Reached target NFS client services.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Reached target Remote File Systems.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:29:31 np0005539583.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:29:32 +0000. Up 11.25 seconds.
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |  eth0  | True |        38.102.83.203         | 255.255.255.0 | global | fa:16:3e:0f:33:03 |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe0f:3303/64 |       .       |  link  | fa:16:3e:0f:33:03 |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 06:29:32 np0005539583.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
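The /32 host route to 169.254.169.254 via 38.102.83.126 is what makes the OpenStack metadata service reachable from the instance. With that route in place the endpoint can be queried directly (standard OpenStack path shown, though this instance ends up using a config drive as its datasource, per the final cloud-init line further down):

    curl -s http://169.254.169.254/openstack/latest/meta_data.json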
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Nov 29 06:29:33 np0005539583.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Generating public/private rsa key pair.
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key fingerprint is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: SHA256:uhVwDaRnwX4EeqicKaF3nSTIFmTaePjShIwtuhUhUsc root@np0005539583.novalocal
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key's randomart image is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +---[RSA 3072]----+
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |oo*..  o=.       |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |+@ =E  +.+.      |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |B.@ . *.=..      |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |.O + B B. .      |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |+ * * o S.       |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: | = o   . .       |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |.     . .        |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |       o         |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |      .          |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key fingerprint is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: SHA256:dwsvwrBARgDFr7Pr2G3WdBxDrUm8zkhp/6XRZB75eI0 root@np0005539583.novalocal
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key's randomart image is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +---[ECDSA 256]---+
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |.+o..  . .       |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |  ..    + .      |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   .o  + +   .   |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   o. + *   =    |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   ..o.*Soo=.+ o |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |  o  .o+*..+=.E .|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   o o..o..+o.   |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: | o..o .  .o.     |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |..++.            |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key fingerprint is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: SHA256:29HL2UNda0ROgVD9+jNLCFBB282jsqsIDsHka4fWYRQ root@np0005539583.novalocal
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: The key's randomart image is:
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +--[ED25519 256]--+
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |     E    .=+.o+.|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |      .   . o.*. |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   . .   . . . *o|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |  + .     ..  o.=|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   + o  S .o...+.|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |    * .  o o+=+  |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |   * +  . ..+.oo |
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |  o + . .   . .+.|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: |     . . ...   .+|
Nov 29 06:29:33 np0005539583.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 29 06:29:33 np0005539583.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 06:29:33 np0005539583.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 29 06:29:33 np0005539583.novalocal systemd[1]: Reached target Network is Online.
Nov 29 06:29:33 np0005539583.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 06:29:33 np0005539583.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting System Logging Service...
Nov 29 06:29:34 np0005539583.novalocal sm-notify[1001]: Version 2.5.4 starting
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting Permit User Sessions...
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 29 06:29:34 np0005539583.novalocal sshd[1003]: Server listening on 0.0.0.0 port 22.
Nov 29 06:29:34 np0005539583.novalocal sshd[1003]: Server listening on :: port 22.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Finished Permit User Sessions.
Nov 29 06:29:34 np0005539583.novalocal rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started Command Scheduler.
Nov 29 06:29:34 np0005539583.novalocal rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started Getty on tty1.
Nov 29 06:29:34 np0005539583.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Nov 29 06:29:34 np0005539583.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 29 06:29:34 np0005539583.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 26% if used.)
Nov 29 06:29:34 np0005539583.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Reached target Login Prompts.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Started System Logging Service.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Reached target Multi-User System.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 06:29:34 np0005539583.novalocal rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:29:34 np0005539583.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 29 06:29:34 np0005539583.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1161]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:29:34 +0000. Up 13.56 seconds.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 06:29:34 np0005539583.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 06:29:34 np0005539583.novalocal dracut[1262]: dracut-057-102.git20250818.el9
Nov 29 06:29:34 np0005539583.novalocal dracut[1264]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
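kdump is rebuilding a strictly host-only initramfs with the root mount options baked into the image. Once the build completes, the result can be inspected without booting it; lsinitrd ships with dracut:

    lsinitrd /boot/initramfs-5.14.0-642.el9.x86_64kdump.img | head -n 40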
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1314]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:29:34 +0000. Up 14.01 seconds.
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1339]: #############################################################
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1341]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1343]: 256 SHA256:dwsvwrBARgDFr7Pr2G3WdBxDrUm8zkhp/6XRZB75eI0 root@np0005539583.novalocal (ECDSA)
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1345]: 256 SHA256:29HL2UNda0ROgVD9+jNLCFBB282jsqsIDsHka4fWYRQ root@np0005539583.novalocal (ED25519)
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1350]: 3072 SHA256:uhVwDaRnwX4EeqicKaF3nSTIFmTaePjShIwtuhUhUsc root@np0005539583.novalocal (RSA)
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1351]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 06:29:34 np0005539583.novalocal cloud-init[1352]: #############################################################
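The fingerprints in the console banner exist for out-of-band verification; they can be recomputed from the public key files cloud-init just generated:

    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done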
Nov 29 06:29:35 np0005539583.novalocal cloud-init[1314]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:29:35 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 14.22 seconds
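cloud-init reports finishing 14.22 seconds after boot, with the config drive as its datasource. Its built-in analyzer can break that time down per stage and per module from the same logs:

    cloud-init analyze show    # timing per boot stage
    cloud-init analyze blame   # slowest config modules first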
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1370]: Unable to negotiate with 38.102.83.114 port 41034: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1381]: Unable to negotiate with 38.102.83.114 port 41050: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1388]: Unable to negotiate with 38.102.83.114 port 41052: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 29 06:29:35 np0005539583.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1393]: Connection closed by 38.102.83.114 port 41060 [preauth]
Nov 29 06:29:35 np0005539583.novalocal systemd[1]: Reached target Cloud-init target.
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1365]: Connection closed by 38.102.83.114 port 41018 [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1398]: Connection reset by 38.102.83.114 port 41074 [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1406]: Unable to negotiate with 38.102.83.114 port 41086: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1375]: Connection closed by 38.102.83.114 port 41038 [preauth]
Nov 29 06:29:35 np0005539583.novalocal sshd-session[1411]: Unable to negotiate with 38.102.83.114 port 46736: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
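The "no matching host key type found" rejections come from one client (38.102.83.114) probing a single algorithm family per connection. Most of these offers name algorithms the server cannot satisfy: the generated ECDSA key is nistp256, so the nistp384/nistp521 probes cannot match, and ssh-dss plus SHA-1 ssh-rsa signatures are disabled by RHEL 9's system-wide crypto policy. What a given OpenSSH build supports at all is queryable client-side:

    ssh -Q key   # host key algorithms compiled into this OpenSSH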
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: memstrack is not available
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:29:35 np0005539583.novalocal dracut[1264]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: memstrack is not available
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: *** Including module: systemd ***
Nov 29 06:29:36 np0005539583.novalocal chronyd[792]: Selected source 158.69.193.108 (2.centos.pool.ntp.org)
Nov 29 06:29:36 np0005539583.novalocal chronyd[792]: System clock TAI offset set to 37 seconds
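chronyd has settled on 158.69.193.108 from the CentOS NTP pool and pushed the 37-second TAI offset into the kernel. Current synchronization state is available from chronyc:

    chronyc tracking     # reference source, stratum, current offset
    chronyc sources -v   # all sources with reachability and selection flags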
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: *** Including module: fips ***
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: *** Including module: systemd-initrd ***
Nov 29 06:29:36 np0005539583.novalocal dracut[1264]: *** Including module: i18n ***
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]: *** Including module: drm ***
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]: *** Including module: prefixdevname ***
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]: *** Including module: kernel-modules ***
Nov 29 06:29:37 np0005539583.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]: *** Including module: kernel-modules-extra ***
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 29 06:29:37 np0005539583.novalocal dracut[1264]: *** Including module: qemu ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: fstab-sys ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: rootfs-block ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: terminfo ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: udev-rules ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: Skipping udev rule: 91-permissions.rules
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: virtiofs ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: dracut-systemd ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: usrmount ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: base ***
Nov 29 06:29:38 np0005539583.novalocal dracut[1264]: *** Including module: fs-lib ***
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]: *** Including module: kdumpbase ***
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:   microcode_ctl module: mangling fw_dir
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]: *** Including module: openssl ***
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]: *** Including module: shutdown ***
Nov 29 06:29:39 np0005539583.novalocal dracut[1264]: *** Including module: squash ***
Nov 29 06:29:40 np0005539583.novalocal dracut[1264]: *** Including modules done ***
Nov 29 06:29:40 np0005539583.novalocal dracut[1264]: *** Installing kernel module dependencies ***
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 35 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 33 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 31 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 28 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 34 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 32 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 30 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 06:29:40 np0005539583.novalocal irqbalance[777]: IRQ 29 affinity is now unmanaged
Nov 29 06:29:40 np0005539583.novalocal dracut[1264]: *** Installing kernel module dependencies done ***
Nov 29 06:29:40 np0005539583.novalocal dracut[1264]: *** Resolving executable dependencies ***
Nov 29 06:29:41 np0005539583.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: *** Resolving executable dependencies done ***
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: *** Generating early-microcode cpio image ***
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: *** Store current command line parameters ***
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: Stored kernel commandline:
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: No dracut internal kernel commandline stored in the initramfs
Nov 29 06:29:42 np0005539583.novalocal dracut[1264]: *** Install squash loader ***
Nov 29 06:29:43 np0005539583.novalocal dracut[1264]: *** Squashing the files inside the initramfs ***
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: *** Squashing the files inside the initramfs done ***
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: *** Hardlinking files ***
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Mode:           real
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Files:          50
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Linked:         0 files
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Compared:       0 xattrs
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Compared:       0 files
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Saved:          0 B
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: Duration:       0.000582 seconds
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: *** Hardlinking files done ***
Nov 29 06:29:44 np0005539583.novalocal dracut[1264]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 06:29:45 np0005539583.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 29 06:29:45 np0005539583.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 29 06:29:45 np0005539583.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 29 06:29:45 np0005539583.novalocal systemd[1]: Startup finished in 4.291s (kernel) + 3.358s (initrd) + 17.347s (userspace) = 24.997s.
Nov 29 06:29:48 np0005539583.novalocal sshd-session[4291]: Accepted publickey for zuul from 38.102.83.114 port 43334 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 06:29:48 np0005539583.novalocal systemd-logind[782]: New session 1 of user zuul.
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Queued start job for default target Main User Target.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Created slice User Application Slice.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Reached target Paths.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Reached target Timers.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Starting D-Bus User Message Bus Socket...
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Starting Create User's Volatile Files and Directories...
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Finished Create User's Volatile Files and Directories.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Reached target Sockets.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Reached target Basic System.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Reached target Main User Target.
Nov 29 06:29:48 np0005539583.novalocal systemd[4295]: Startup finished in 149ms.
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 29 06:29:48 np0005539583.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 29 06:29:48 np0005539583.novalocal sshd-session[4291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:29:48 np0005539583.novalocal python3[4377]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:51 np0005539583.novalocal python3[4405]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:57 np0005539583.novalocal python3[4465]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:58 np0005539583.novalocal python3[4505]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 06:29:58 np0005539583.novalocal sshd-session[4440]: Received disconnect from 101.47.142.104 port 49224:11: Bye Bye [preauth]
Nov 29 06:29:58 np0005539583.novalocal sshd-session[4440]: Disconnected from authenticating user root 101.47.142.104 port 49224 [preauth]
Nov 29 06:30:00 np0005539583.novalocal python3[4531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVIqfnU4XykA0ePxNMrfZGMtOrbhPBDp5yjOcyZEjumiNeb6TNj304ZlF5ug9y87bfwZ/aKG804hwsYwOQGohrfKxee2QiVBU1qO7bGI4U9FU+rSVvKTnMcl3v9IFnEkxu90pPWm+mrG6+xhMk0sa7IbtsKl6vjWBqKKGobfB4JvDxfvqIigWW4JZ1mv72AxPnrrf+LH5IpP+D9VSkj2qOOkeysht1GrACIsp15aluBGQSxOTCc4n1hrcU9yRjtwW0+kdCqP1DypYemNLh5Tr/QazNhxaQQrZR8BPQzcC1hzZJaljwvSSxZlv8UNC3Ul4ycsTDgotcJC0vWqyuMYp2/G/X6ZdSGz4RlZMvX0291zDG09h4evGR+hywFSVeCG3DyrrWoCAwEBYcs/cSDY2+9ruKY4IeS4p0z9OpYTmnUfKqnw2VHfVwNEfRSMCp4Svuyv51ukBQxSfG2JiqsLKXZJ15ivljlVilL7HYiJKf5rlJJpKvf9N6MGgLOm4VSCs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:01 np0005539583.novalocal python3[4555]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:01 np0005539583.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:30:01 np0005539583.novalocal python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:02 np0005539583.novalocal python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397801.469732-207-235513561025689/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a0017ec17400414381ae45d9ef699ad8_id_rsa follow=False checksum=05f335c14dc3138c32a4b05935d14c687b895539 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:02 np0005539583.novalocal python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:03 np0005539583.novalocal python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397802.4371257-240-81905512538735/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a0017ec17400414381ae45d9ef699ad8_id_rsa.pub follow=False checksum=ce89459655538aa98c31b1a53a723823bce4c029 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:04 np0005539583.novalocal python3[4969]: ansible-ping Invoked with data=pong
Nov 29 06:30:05 np0005539583.novalocal python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:06 np0005539583.novalocal python3[5051]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 06:30:07 np0005539583.novalocal python3[5083]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:08 np0005539583.novalocal python3[5107]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:08 np0005539583.novalocal python3[5131]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:09 np0005539583.novalocal python3[5155]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:09 np0005539583.novalocal python3[5179]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:09 np0005539583.novalocal python3[5203]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:10 np0005539583.novalocal sudo[5227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywqixozwqmvxofntiyucsnzeoxmkhsc ; /usr/bin/python3'
Nov 29 06:30:10 np0005539583.novalocal sudo[5227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:11 np0005539583.novalocal python3[5229]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:11 np0005539583.novalocal sudo[5227]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:11 np0005539583.novalocal sudo[5305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjzdbxbfyxqyckmxvuxdlleaotbxcks ; /usr/bin/python3'
Nov 29 06:30:11 np0005539583.novalocal sudo[5305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:11 np0005539583.novalocal python3[5307]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:11 np0005539583.novalocal sudo[5305]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:12 np0005539583.novalocal sudo[5378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfcofhjrpchycinmmiihqexxlrkpubvs ; /usr/bin/python3'
Nov 29 06:30:12 np0005539583.novalocal sudo[5378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:12 np0005539583.novalocal python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397811.2786238-21-121746555906326/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:12 np0005539583.novalocal sudo[5378]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:12 np0005539583.novalocal python3[5428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:13 np0005539583.novalocal python3[5452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:13 np0005539583.novalocal python3[5476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:13 np0005539583.novalocal python3[5500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:14 np0005539583.novalocal python3[5524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:14 np0005539583.novalocal python3[5548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:14 np0005539583.novalocal python3[5572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:14 np0005539583.novalocal python3[5596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:15 np0005539583.novalocal python3[5620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:15 np0005539583.novalocal python3[5644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:16 np0005539583.novalocal python3[5668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:16 np0005539583.novalocal python3[5692]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:16 np0005539583.novalocal python3[5716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:16 np0005539583.novalocal python3[5740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:17 np0005539583.novalocal python3[5764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:17 np0005539583.novalocal python3[5788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:17 np0005539583.novalocal python3[5812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:18 np0005539583.novalocal python3[5836]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:18 np0005539583.novalocal python3[5860]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:18 np0005539583.novalocal python3[5884]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:19 np0005539583.novalocal python3[5908]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:19 np0005539583.novalocal python3[5932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:19 np0005539583.novalocal python3[5956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:19 np0005539583.novalocal python3[5980]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:20 np0005539583.novalocal python3[6004]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:20 np0005539583.novalocal python3[6028]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:22 np0005539583.novalocal sudo[6052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtngeqichqwkjkvxhceduoswbritvqci ; /usr/bin/python3'
Nov 29 06:30:22 np0005539583.novalocal sudo[6052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:22 np0005539583.novalocal python3[6054]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 06:30:22 np0005539583.novalocal systemd[1]: Starting Time & Date Service...
Nov 29 06:30:22 np0005539583.novalocal systemd[1]: Started Time & Date Service.
Nov 29 06:30:22 np0005539583.novalocal systemd-timedated[6056]: Changed time zone to 'UTC' (UTC).
Nov 29 06:30:22 np0005539583.novalocal sudo[6052]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:22 np0005539583.novalocal sudo[6083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasbdltstengipwgvybhtspbtdhqdlic ; /usr/bin/python3'
Nov 29 06:30:22 np0005539583.novalocal sudo[6083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:22 np0005539583.novalocal python3[6085]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:22 np0005539583.novalocal sudo[6083]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:23 np0005539583.novalocal python3[6161]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:23 np0005539583.novalocal python3[6232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397822.9836023-153-43612832959353/source _original_basename=tmp6tzdvhiw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:24 np0005539583.novalocal python3[6332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:24 np0005539583.novalocal python3[6403]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397823.8011806-183-6043596221375/source _original_basename=tmpos54o7zg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:24 np0005539583.novalocal sudo[6503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jotnolmyymlshkgcjnkeboiapspvlxoi ; /usr/bin/python3'
Nov 29 06:30:24 np0005539583.novalocal sudo[6503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:25 np0005539583.novalocal python3[6505]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:25 np0005539583.novalocal sudo[6503]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:25 np0005539583.novalocal sudo[6576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkugbsouanihmqvrmdpfmvqdgzzkrinz ; /usr/bin/python3'
Nov 29 06:30:25 np0005539583.novalocal sudo[6576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:25 np0005539583.novalocal python3[6578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397824.80522-231-15503992797914/source _original_basename=tmp557s2jzd follow=False checksum=b142a25af330be1fcee690876fc8170e6e08af8f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:25 np0005539583.novalocal sudo[6576]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:26 np0005539583.novalocal python3[6626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:26 np0005539583.novalocal python3[6652]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:26 np0005539583.novalocal sudo[6730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubwocuhqdwpzczippjzxamqeelndiros ; /usr/bin/python3'
Nov 29 06:30:26 np0005539583.novalocal sudo[6730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:26 np0005539583.novalocal python3[6732]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:26 np0005539583.novalocal sudo[6730]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:26 np0005539583.novalocal sudo[6803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrnmdxxpjdgmkkgbrxlylzdghlclcxn ; /usr/bin/python3'
Nov 29 06:30:26 np0005539583.novalocal sudo[6803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:27 np0005539583.novalocal python3[6805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397826.4013994-273-69070719520/source _original_basename=tmprwm1hlg_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:27 np0005539583.novalocal sudo[6803]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:27 np0005539583.novalocal sudo[6854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfjssqthkcbovuvlkqdnyiqbdojsebg ; /usr/bin/python3'
Nov 29 06:30:27 np0005539583.novalocal sudo[6854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:27 np0005539583.novalocal python3[6856]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-4909-fb60-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:27 np0005539583.novalocal sudo[6854]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:28 np0005539583.novalocal python3[6884]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-4909-fb60-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 06:30:30 np0005539583.novalocal python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:52 np0005539583.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 06:31:05 np0005539583.novalocal sudo[6939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nacogcddyqwgeolcizrwgppkqudwyfqm ; /usr/bin/python3'
Nov 29 06:31:05 np0005539583.novalocal sudo[6939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:05 np0005539583.novalocal python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:05 np0005539583.novalocal sudo[6939]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 06:31:39 np0005539583.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 06:31:39 np0005539583.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.2752] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:31:39 np0005539583.novalocal systemd-udevd[6942]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.2976] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3001] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3005] device (eth1): carrier: link connected
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3006] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3013] policy: auto-activating connection 'Wired connection 1' (2bd76030-c321-3792-974f-8b7377201814)
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3016] device (eth1): Activation: starting connection 'Wired connection 1' (2bd76030-c321-3792-974f-8b7377201814)
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3017] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3019] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3022] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:31:39 np0005539583.novalocal NetworkManager[854]: <info>  [1764397899.3026] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:31:40 np0005539583.novalocal python3[6969]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-ea7d-8567-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:31:50 np0005539583.novalocal sudo[7047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcusaionoapxusmjcgobewfmqxgembft ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:31:50 np0005539583.novalocal sudo[7047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:50 np0005539583.novalocal python3[7049]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:31:50 np0005539583.novalocal sudo[7047]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:50 np0005539583.novalocal sudo[7120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwekawhpkpsdusogrzwbdzsfwrmsxucr ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:31:50 np0005539583.novalocal sudo[7120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:50 np0005539583.novalocal python3[7122]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397910.114437-102-269814464644614/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=a491dbc708ba8a07a618f852c53e59093df83a6a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:50 np0005539583.novalocal sudo[7120]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:51 np0005539583.novalocal sudo[7170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawldobcxjhfbxcfmrszyfmulrigbckd ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:31:51 np0005539583.novalocal sudo[7170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:51 np0005539583.novalocal python3[7172]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Stopping Network Manager...
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7088] caught SIGTERM, shutting down normally.
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7105] dhcp4 (eth0): canceled DHCP transaction
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7105] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7106] dhcp4 (eth0): state changed no lease
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7109] manager: NetworkManager state is now CONNECTING
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7291] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7291] dhcp4 (eth1): state changed no lease
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[854]: <info>  [1764397911.7387] exiting (success)
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Stopped Network Manager.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: NetworkManager.service: Consumed 1.152s CPU time, 10.0M memory peak.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.7961] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:558c485d-5d9e-4d57-8633-c8b29a02c676)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.7964] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.8037] manager[0x55a5872be070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9338] hostname: hostname: using hostnamed
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9343] hostname: static hostname changed from (none) to "np0005539583.novalocal"
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9352] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9358] manager[0x55a5872be070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9358] manager[0x55a5872be070]: rfkill: WWAN hardware radio set enabled
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9389] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9389] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9390] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9390] manager: Networking is enabled by state file
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9392] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9397] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9430] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9444] dhcp: init: Using DHCP client 'internal'
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9447] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9454] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9460] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9469] device (lo): Activation: starting connection 'lo' (aa60ea5c-c713-40ed-b887-294ab3e1a707)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9476] device (eth0): carrier: link connected
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9480] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9486] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9486] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9494] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9499] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9506] device (eth1): carrier: link connected
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9509] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9514] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2bd76030-c321-3792-974f-8b7377201814) (indicated)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9515] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9519] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9526] device (eth1): Activation: starting connection 'Wired connection 1' (2bd76030-c321-3792-974f-8b7377201814)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9534] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Started Network Manager.
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9537] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9556] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9558] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9560] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9563] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9566] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9569] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9574] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9579] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9583] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9590] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9593] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9610] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9612] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9617] device (lo): Activation: successful, device activated.
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9632] dhcp4 (eth0): state changed new lease, address=38.102.83.203
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9638] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:31:51 np0005539583.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9711] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9726] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9728] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9732] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9736] device (eth0): Activation: successful, device activated.
Nov 29 06:31:51 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397911.9740] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:31:51 np0005539583.novalocal sudo[7170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:52 np0005539583.novalocal python3[7256]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-ea7d-8567-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
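[annotation] The job records the routing table right after NetworkManager made 'System eth0' the IPv4 default. A minimal sketch of the same check, assuming an iproute2 build that supports JSON output (ip -j); the interface name eth0 comes from the log, everything else is illustrative:

#!/usr/bin/env python3
"""Print the default IPv4 route, roughly what the 'ip route' task above inspects."""
import json
import subprocess

def default_route():
    # 'ip -j route show default' emits JSON on reasonably recent iproute2.
    out = subprocess.run(
        ["ip", "-j", "route", "show", "default"],
        check=True, capture_output=True, text=True,
    ).stdout
    routes = json.loads(out) if out.strip() else []
    return routes[0] if routes else None

if __name__ == "__main__":
    route = default_route()
    if route is None:
        raise SystemExit("no default route installed yet")
    # Expect something like: default via <gateway> dev eth0, per the policy line above.
    print(f"default via {route.get('gateway')} dev {route.get('dev')}")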
Nov 29 06:32:02 np0005539583.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:32:21 np0005539583.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:32:25 np0005539583.novalocal systemd[4295]: Starting Mark boot as successful...
Nov 29 06:32:25 np0005539583.novalocal systemd[4295]: Finished Mark boot as successful.
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.7847] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:32:36 np0005539583.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:32:36 np0005539583.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8116] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8119] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8129] device (eth1): Activation: successful, device activated.
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8136] manager: startup complete
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8139] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <warn>  [1764397956.8145] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8153] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8250] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8250] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8251] dhcp4 (eth1): state changed no lease
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8270] policy: auto-activating connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef)
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8275] device (eth1): Activation: starting connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef)
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8276] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8280] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8287] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8298] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8345] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8348] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:32:36 np0005539583.novalocal NetworkManager[7181]: <info>  [1764397956.8356] device (eth1): Activation: successful, device activated.
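[annotation] eth1 first assumes 'Wired connection 1', its DHCP transaction times out with ip-config-unavailable, and NetworkManager then auto-activates the 'ci-private-network' profile instead. A small sketch for confirming which profile each device settled on, assuming nmcli is installed; device and connection names are taken from the log:

#!/usr/bin/env python3
"""Show which NetworkManager connection each device ended up on."""
import subprocess

def device_status():
    # Terse, machine-readable listing: DEVICE:STATE:CONNECTION per line.
    out = subprocess.run(
        ["nmcli", "-t", "-f", "DEVICE,STATE,CONNECTION", "device"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        device, state, connection = line.split(":", 2)
        yield device, state, connection

if __name__ == "__main__":
    for device, state, connection in device_status():
        # Per the log, eth0 stays on 'System eth0' while eth1 falls back to
        # 'ci-private-network' after the first DHCP transaction yields no lease.
        print(f"{device:8} {state:12} {connection}")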
Nov 29 06:32:46 np0005539583.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:32:52 np0005539583.novalocal sshd-session[4304]: Received disconnect from 38.102.83.114 port 43334:11: disconnected by user
Nov 29 06:32:52 np0005539583.novalocal sshd-session[4304]: Disconnected from user zuul 38.102.83.114 port 43334
Nov 29 06:32:52 np0005539583.novalocal sshd-session[4291]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:32:52 np0005539583.novalocal systemd-logind[782]: Session 1 logged out. Waiting for processes to exit.
Nov 29 06:32:55 np0005539583.novalocal sshd-session[7289]: Accepted publickey for zuul from 38.102.83.114 port 46424 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 06:32:55 np0005539583.novalocal systemd-logind[782]: New session 3 of user zuul.
Nov 29 06:32:55 np0005539583.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 29 06:32:55 np0005539583.novalocal sshd-session[7289]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:32:55 np0005539583.novalocal sudo[7368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekmhmhszwmppalpwzsnvhprqigbbvkq ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:32:55 np0005539583.novalocal sudo[7368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:55 np0005539583.novalocal python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:32:55 np0005539583.novalocal sudo[7368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:56 np0005539583.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewhszfwqmnwjqsxqoyrckqcfthevfouo ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:32:56 np0005539583.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:56 np0005539583.novalocal python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397975.5844429-267-10957545272627/source _original_basename=tmp_1rgg2d7 follow=False checksum=2edee0418c2bbe355ab77832f378d2bd50a55ed3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:56 np0005539583.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:58 np0005539583.novalocal sshd-session[7292]: Connection closed by 38.102.83.114 port 46424
Nov 29 06:32:58 np0005539583.novalocal sshd-session[7289]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:32:58 np0005539583.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 06:32:58 np0005539583.novalocal systemd-logind[782]: Session 3 logged out. Waiting for processes to exit.
Nov 29 06:32:58 np0005539583.novalocal systemd-logind[782]: Removed session 3.
Nov 29 06:33:01 np0005539583.novalocal sshd-session[7287]: Invalid user kali from 101.47.142.104 port 43086
Nov 29 06:33:02 np0005539583.novalocal sshd-session[7287]: Received disconnect from 101.47.142.104 port 43086:11: Bye Bye [preauth]
Nov 29 06:33:02 np0005539583.novalocal sshd-session[7287]: Disconnected from invalid user kali 101.47.142.104 port 43086 [preauth]
Nov 29 06:34:15 np0005539583.novalocal sshd[1003]: Timeout before authentication for connection from 58.209.82.184 to 38.102.83.203, pid = 7259
Nov 29 06:35:25 np0005539583.novalocal systemd[4295]: Created slice User Background Tasks Slice.
Nov 29 06:35:25 np0005539583.novalocal systemd[4295]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 06:35:25 np0005539583.novalocal systemd[4295]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 06:37:22 np0005539583.novalocal sshd[1003]: Timeout before authentication for connection from 101.47.142.104 to 38.102.83.203, pid = 7470
Nov 29 06:37:47 np0005539583.novalocal sshd[1003]: drop connection #0 from [101.47.142.104]:56178 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 06:40:17 np0005539583.novalocal sshd-session[7477]: Invalid user ubadmin from 101.47.142.104 port 43900
Nov 29 06:40:18 np0005539583.novalocal sshd-session[7480]: Accepted publickey for zuul from 38.102.83.114 port 37552 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 06:40:18 np0005539583.novalocal systemd-logind[782]: New session 4 of user zuul.
Nov 29 06:40:18 np0005539583.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 29 06:40:18 np0005539583.novalocal sshd-session[7480]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:40:18 np0005539583.novalocal sudo[7507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idonncqejizfxazsycxtnfhaokkfkmeg ; /usr/bin/python3'
Nov 29 06:40:18 np0005539583.novalocal sudo[7507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 np0005539583.novalocal python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-e01a-3859-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:19 np0005539583.novalocal sudo[7507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:19 np0005539583.novalocal sudo[7535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrrubqjqzcfsxmcrhwnwvvsivhfswxpz ; /usr/bin/python3'
Nov 29 06:40:19 np0005539583.novalocal sudo[7535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 np0005539583.novalocal python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:19 np0005539583.novalocal sudo[7535]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:19 np0005539583.novalocal sudo[7561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjqfudvnstpwfikvnrnfzdgturhwxgku ; /usr/bin/python3'
Nov 29 06:40:19 np0005539583.novalocal sudo[7561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 np0005539583.novalocal python3[7563]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:19 np0005539583.novalocal sudo[7561]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:19 np0005539583.novalocal sudo[7588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lduxoawtvrjsvjnawsbfmpcrzexjzrax ; /usr/bin/python3'
Nov 29 06:40:19 np0005539583.novalocal sudo[7588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 np0005539583.novalocal python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:19 np0005539583.novalocal sudo[7588]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:19 np0005539583.novalocal sshd-session[7477]: Received disconnect from 101.47.142.104 port 43900:11: Bye Bye [preauth]
Nov 29 06:40:19 np0005539583.novalocal sshd-session[7477]: Disconnected from invalid user ubadmin 101.47.142.104 port 43900 [preauth]
Nov 29 06:40:19 np0005539583.novalocal sudo[7614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfylcecmqzrzlozulomtukmgoytgqzue ; /usr/bin/python3'
Nov 29 06:40:19 np0005539583.novalocal sudo[7614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:20 np0005539583.novalocal python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:20 np0005539583.novalocal sudo[7614]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:20 np0005539583.novalocal sudo[7640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asqoxrqtjwgtfrwdokjecndttbrdzpgf ; /usr/bin/python3'
Nov 29 06:40:20 np0005539583.novalocal sudo[7640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:20 np0005539583.novalocal python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:20 np0005539583.novalocal sudo[7640]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:21 np0005539583.novalocal sudo[7718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgunnljskdfzvvtxckrgftvkwufwqyh ; /usr/bin/python3'
Nov 29 06:40:21 np0005539583.novalocal sudo[7718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:21 np0005539583.novalocal python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:40:21 np0005539583.novalocal sudo[7718]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:21 np0005539583.novalocal sudo[7791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsszcncsefxeyzsbkrertlryyjlhhxb ; /usr/bin/python3'
Nov 29 06:40:21 np0005539583.novalocal sudo[7791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:21 np0005539583.novalocal python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398421.164851-477-34260448997893/source _original_basename=tmplx7s8by3 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:21 np0005539583.novalocal sudo[7791]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:22 np0005539583.novalocal sudo[7841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neenogpfrfkqbqpbcdfbvnfgqdsrsbiw ; /usr/bin/python3'
Nov 29 06:40:22 np0005539583.novalocal sudo[7841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:22 np0005539583.novalocal python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:40:22 np0005539583.novalocal systemd[1]: Reloading.
Nov 29 06:40:22 np0005539583.novalocal systemd-rc-local-generator[7865]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:22 np0005539583.novalocal sudo[7841]: pam_unix(sudo:session): session closed for user root
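[annotation] The play stats and copies /etc/systemd/system.conf.d/override.conf (its body is logged as NOT_LOGGING_PARAMETER, so the actual settings are not visible here) and then requests a daemon reload, which produces the 'Reloading.' line above. A rough non-Ansible equivalent, with a placeholder body because the real drop-in content is unknown:

#!/usr/bin/env python3
"""Drop a systemd manager override and reload, mirroring the copy + daemon_reload tasks."""
import pathlib
import subprocess

DROPIN_DIR = pathlib.Path("/etc/systemd/system.conf.d")
DROPIN = DROPIN_DIR / "override.conf"

# Placeholder only: the real override.conf content is not recorded in the log.
CONTENT = "[Manager]\n# manager settings omitted from the log\n"

def install_override():
    DROPIN_DIR.mkdir(parents=True, exist_ok=True)   # directory created with mode=0755 in the play
    DROPIN.write_text(CONTENT)
    DROPIN.chmod(0o644)                              # matches mode=0644 above
    # Equivalent of ansible.builtin.systemd_service daemon_reload=True.
    subprocess.run(["systemctl", "daemon-reload"], check=True)

if __name__ == "__main__":
    install_override()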
Nov 29 06:40:24 np0005539583.novalocal sudo[7898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aidnafdxddnwfznebzrshoiuihiyysfw ; /usr/bin/python3'
Nov 29 06:40:24 np0005539583.novalocal sudo[7898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:24 np0005539583.novalocal python3[7900]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 06:40:24 np0005539583.novalocal sudo[7898]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:24 np0005539583.novalocal sudo[7924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauzxwmnqhzavtxbpyyjrdptqsmtujiq ; /usr/bin/python3'
Nov 29 06:40:24 np0005539583.novalocal sudo[7924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:25 np0005539583.novalocal python3[7926]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:25 np0005539583.novalocal sudo[7924]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:25 np0005539583.novalocal sudo[7952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xndknflezynwgjbjgvbtdmypksoegprg ; /usr/bin/python3'
Nov 29 06:40:25 np0005539583.novalocal sudo[7952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:25 np0005539583.novalocal python3[7954]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:25 np0005539583.novalocal sudo[7952]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:25 np0005539583.novalocal sudo[7980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gncrxfjlyhgljewwoynwetyqhpvutwxe ; /usr/bin/python3'
Nov 29 06:40:25 np0005539583.novalocal sudo[7980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:25 np0005539583.novalocal python3[7982]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:25 np0005539583.novalocal sudo[7980]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:25 np0005539583.novalocal sudo[8008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjobgbjhhllwbuyacqletxfryzumqox ; /usr/bin/python3'
Nov 29 06:40:25 np0005539583.novalocal sudo[8008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:25 np0005539583.novalocal python3[8010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:25 np0005539583.novalocal sudo[8008]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:26 np0005539583.novalocal python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-e01a-3859-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:26 np0005539583.novalocal python3[8067]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
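[annotation] The four echo tasks write identical cgroup v2 I/O limits for device 252:0 (the /dev/vda queried by the earlier lsblk task) into the io.max file of each top-level slice, and the follow-up task cats them back. A sketch of the same write/verify cycle, assuming cgroup v2 is mounted at /sys/fs/cgroup with the io controller enabled on those slices:

#!/usr/bin/env python3
"""Apply the io.max limits from the log to each top-level cgroup and read them back."""
import pathlib

CGROUP_ROOT = pathlib.Path("/sys/fs/cgroup")
# Values taken verbatim from the commands above; 252:0 is /dev/vda.
LIMIT = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
SLICES = ["init.scope", "machine.slice", "system.slice", "user.slice"]

def apply_limits():
    for name in SLICES:
        io_max = CGROUP_ROOT / name / "io.max"
        io_max.write_text(LIMIT + "\n")            # same effect as: echo "..." > .../io.max
        print(name, io_max.read_text().strip())    # verification, like the cat loop above

if __name__ == "__main__":
    apply_limits()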
Nov 29 06:40:28 np0005539583.novalocal sshd-session[7483]: Connection closed by 38.102.83.114 port 37552
Nov 29 06:40:28 np0005539583.novalocal sshd-session[7480]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:40:28 np0005539583.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 06:40:28 np0005539583.novalocal systemd[1]: session-4.scope: Consumed 4.264s CPU time.
Nov 29 06:40:28 np0005539583.novalocal systemd-logind[782]: Session 4 logged out. Waiting for processes to exit.
Nov 29 06:40:28 np0005539583.novalocal systemd-logind[782]: Removed session 4.
Nov 29 06:40:30 np0005539583.novalocal sshd-session[8073]: Accepted publickey for zuul from 38.102.83.114 port 36798 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 06:40:30 np0005539583.novalocal systemd-logind[782]: New session 5 of user zuul.
Nov 29 06:40:30 np0005539583.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 29 06:40:30 np0005539583.novalocal sshd-session[8073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:40:30 np0005539583.novalocal sudo[8100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noveygvphmkquebfibprhwpmfkuokshp ; /usr/bin/python3'
Nov 29 06:40:30 np0005539583.novalocal sudo[8100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:30 np0005539583.novalocal python3[8102]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
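[annotation] This dnf task installs podman and buildah, which is what drags in the container SELinux policy and triggers the policy reloads that follow. The command-line equivalent is a plain dnf install; a sketch:

#!/usr/bin/env python3
"""Install the container tooling requested by the dnf task above."""
import subprocess

def install(packages):
    # Equivalent of the Ansible dnf module with state=present for these packages.
    subprocess.run(["dnf", "-y", "install", *packages], check=True)

if __name__ == "__main__":
    install(["podman", "buildah"])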
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:40:50 np0005539583.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:01 np0005539583.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:13 np0005539583.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:14 np0005539583.novalocal setsebool[8168]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 06:41:14 np0005539583.novalocal setsebool[8168]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
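[annotation] The virt_use_nfs and virt_sandbox_use_all_caps booleans are flipped while the podman/buildah transaction runs, most likely by a package scriptlet; each persistent change forces another policy rebuild (the repeated "Converting ... SID table entries" blocks). Doing the same by hand is one persistent setsebool call; a sketch, assuming policycoreutils is installed:

#!/usr/bin/env python3
"""Persistently enable the SELinux booleans reported in the log and read them back."""
import subprocess

BOOLEANS = ["virt_use_nfs", "virt_sandbox_use_all_caps"]

def enable(booleans):
    # -P writes the change into the policy store so it survives reboots,
    # which is what produces the extra policy-reload kernel messages above.
    subprocess.run(["setsebool", "-P", *[f"{b}=1" for b in booleans]], check=True)
    for b in booleans:
        print(subprocess.run(["getsebool", b], check=True,
                             capture_output=True, text=True).stdout.strip())

if __name__ == "__main__":
    enable(BOOLEANS)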
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:28 np0005539583.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:46 np0005539583.novalocal dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:41:46 np0005539583.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:41:46 np0005539583.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:41:46 np0005539583.novalocal systemd[1]: Reloading.
Nov 29 06:41:46 np0005539583.novalocal systemd-rc-local-generator[8923]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:46 np0005539583.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:41:48 np0005539583.novalocal sudo[8100]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:48 np0005539583.novalocal python3[10199]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-e62d-b3ae-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:49 np0005539583.novalocal kernel: evm: overlay not supported
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: Starting D-Bus User Message Bus...
Nov 29 06:41:49 np0005539583.novalocal dbus-broker-launch[11381]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 06:41:49 np0005539583.novalocal dbus-broker-launch[11381]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: Started D-Bus User Message Bus.
Nov 29 06:41:49 np0005539583.novalocal dbus-broker-lau[11381]: Ready
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: Created slice Slice /user.
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: podman-11204.scope: unit configures an IP firewall, but not running as root.
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: Started podman-11204.scope.
Nov 29 06:41:49 np0005539583.novalocal systemd[4295]: Started podman-pause-5d514639.scope.
Nov 29 06:41:50 np0005539583.novalocal sudo[12064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqiiaokowahuinaqfkmqjakumzdaxgy ; /usr/bin/python3'
Nov 29 06:41:50 np0005539583.novalocal sudo[12064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:50 np0005539583.novalocal python3[12082]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.113:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.113:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:50 np0005539583.novalocal python3[12082]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 06:41:50 np0005539583.novalocal sudo[12064]: pam_unix(sudo:session): session closed for user root
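[annotation] The blockinfile task appends a marked block to /etc/containers/registries.conf registering 38.102.83.113:5001 as an insecure (plain-HTTP) registry, so the freshly installed podman/buildah can pull from the CI registry. A minimal stand-in for that task, reusing the marker and TOML content shown in the log:

#!/usr/bin/env python3
"""Append the ANSIBLE MANAGED BLOCK from the log to registries.conf if it is missing."""
import pathlib

CONF = pathlib.Path("/etc/containers/registries.conf")
BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
END = "# END ANSIBLE MANAGED BLOCK"
BLOCK = (
    f"{BEGIN}\n"
    "[[registry]]\n"
    'location = "38.102.83.113:5001"\n'
    "insecure = true\n"
    f"{END}\n"
)

def ensure_block():
    text = CONF.read_text() if CONF.exists() else ""
    if BEGIN in text:    # crude idempotence check; blockinfile diffs the whole block
        return
    with CONF.open("a") as fh:
        fh.write("\n" + BLOCK)

if __name__ == "__main__":
    ensure_block()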
Nov 29 06:41:51 np0005539583.novalocal sshd-session[8076]: Connection closed by 38.102.83.114 port 36798
Nov 29 06:41:51 np0005539583.novalocal sshd-session[8073]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:41:51 np0005539583.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 06:41:51 np0005539583.novalocal systemd[1]: session-5.scope: Consumed 1min 12.727s CPU time.
Nov 29 06:41:51 np0005539583.novalocal systemd-logind[782]: Session 5 logged out. Waiting for processes to exit.
Nov 29 06:41:51 np0005539583.novalocal systemd-logind[782]: Removed session 5.
Nov 29 06:42:10 np0005539583.novalocal sshd-session[20067]: Connection closed by 38.102.83.164 port 48322 [preauth]
Nov 29 06:42:10 np0005539583.novalocal sshd-session[20074]: Unable to negotiate with 38.102.83.164 port 48342: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 06:42:10 np0005539583.novalocal sshd-session[20075]: Connection closed by 38.102.83.164 port 48330 [preauth]
Nov 29 06:42:10 np0005539583.novalocal sshd-session[20072]: Unable to negotiate with 38.102.83.164 port 48348: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 06:42:10 np0005539583.novalocal sshd-session[20069]: Unable to negotiate with 38.102.83.164 port 48364: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 06:42:15 np0005539583.novalocal sshd-session[21765]: Accepted publickey for zuul from 38.102.83.114 port 58834 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 06:42:15 np0005539583.novalocal systemd-logind[782]: New session 6 of user zuul.
Nov 29 06:42:15 np0005539583.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 29 06:42:15 np0005539583.novalocal sshd-session[21765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:42:15 np0005539583.novalocal python3[21867]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe/p2tITvdshvPUMH0y+ga9Kks/sowehDfFPUPjDs8gqPj+V4bPYFgwdIFx4SlRLBDiU6ME89KcoKH9Q206npA= zuul@np0005539582.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:15 np0005539583.novalocal sudo[22042]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfhtqipoyddhqegdflnlarjmyqflocxf ; /usr/bin/python3'
Nov 29 06:42:15 np0005539583.novalocal sudo[22042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:15 np0005539583.novalocal python3[22053]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe/p2tITvdshvPUMH0y+ga9Kks/sowehDfFPUPjDs8gqPj+V4bPYFgwdIFx4SlRLBDiU6ME89KcoKH9Q206npA= zuul@np0005539582.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:15 np0005539583.novalocal sudo[22042]: pam_unix(sudo:session): session closed for user root
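[annotation] The same ECDSA key from zuul@np0005539582.novalocal is installed for both the zuul and root accounts via ansible.posix.authorized_key. A rough equivalent that appends the key only if it is not already present, with the 0700/0600 permissions the module enforces; the key string is copied from the log:

#!/usr/bin/env python3
"""Append an SSH public key to a user's authorized_keys, roughly like authorized_key state=present."""
import os
import pathlib
import pwd

KEY = "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe/p2tITvdshvPUMH0y+ga9Kks/sowehDfFPUPjDs8gqPj+V4bPYFgwdIFx4SlRLBDiU6ME89KcoKH9Q206npA= zuul@np0005539582.novalocal"

def install_key(user: str) -> None:
    info = pwd.getpwnam(user)
    ssh_dir = pathlib.Path(info.pw_dir, ".ssh")
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    existing = auth.read_text().splitlines() if auth.exists() else []
    if KEY not in existing:
        with auth.open("a") as fh:
            fh.write(KEY + "\n")
    auth.chmod(0o600)
    os.chown(ssh_dir, info.pw_uid, info.pw_gid)
    os.chown(auth, info.pw_uid, info.pw_gid)

if __name__ == "__main__":
    for account in ("zuul", "root"):   # the two users targeted in the log
        install_key(account)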
Nov 29 06:42:16 np0005539583.novalocal sudo[22390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgaewojnccjopvfjbsgovoawmllgcxcb ; /usr/bin/python3'
Nov 29 06:42:16 np0005539583.novalocal sudo[22390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:16 np0005539583.novalocal python3[22398]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539583.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 06:42:16 np0005539583.novalocal useradd[22472]: new group: name=cloud-admin, GID=1002
Nov 29 06:42:16 np0005539583.novalocal useradd[22472]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 29 06:42:16 np0005539583.novalocal sudo[22390]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:17 np0005539583.novalocal sudo[22600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxradgzjrbrxxolbrhiagaqhslarsdb ; /usr/bin/python3'
Nov 29 06:42:17 np0005539583.novalocal sudo[22600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:17 np0005539583.novalocal python3[22610]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe/p2tITvdshvPUMH0y+ga9Kks/sowehDfFPUPjDs8gqPj+V4bPYFgwdIFx4SlRLBDiU6ME89KcoKH9Q206npA= zuul@np0005539582.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:17 np0005539583.novalocal sudo[22600]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:17 np0005539583.novalocal sudo[22846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxuuazhvpkvekcihieoupckloxqlzhun ; /usr/bin/python3'
Nov 29 06:42:17 np0005539583.novalocal sudo[22846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:17 np0005539583.novalocal python3[22854]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:42:17 np0005539583.novalocal sudo[22846]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:18 np0005539583.novalocal sudo[23078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdmnctklrtteideyoclmzhzsmyqdzhw ; /usr/bin/python3'
Nov 29 06:42:18 np0005539583.novalocal sudo[23078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:18 np0005539583.novalocal python3[23086]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398537.4089859-135-112483750907964/source _original_basename=tmpj8tytvh2 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:18 np0005539583.novalocal sudo[23078]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:18 np0005539583.novalocal sudo[23352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooxqzybbjznbvcfnrviybbagzwldefqm ; /usr/bin/python3'
Nov 29 06:42:18 np0005539583.novalocal sudo[23352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:19 np0005539583.novalocal python3[23361]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 06:42:19 np0005539583.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:42:19 np0005539583.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:42:19 np0005539583.novalocal systemd-hostnamed[23466]: Changed pretty hostname to 'compute-0'
Nov 29 06:42:19 compute-0 systemd-hostnamed[23466]: Hostname set to <compute-0> (static)
Nov 29 06:42:19 compute-0 NetworkManager[7181]: <info>  [1764398539.1948] hostname: static hostname changed from "np0005539583.novalocal" to "compute-0"
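[annotation] ansible.builtin.hostname with use=systemd asks systemd-hostnamed to change the static and pretty hostname to compute-0, which is why NetworkManager immediately reports the change and later lines carry the new name. Outside Ansible this is a single hostnamectl call; a sketch, assuming hostnamectl is available:

#!/usr/bin/env python3
"""Set the static hostname via systemd-hostnamed, like the hostname task above."""
import subprocess

def set_hostname(name: str) -> None:
    # hostnamectl talks to systemd-hostnamed over D-Bus, the same path the
    # ansible.builtin.hostname module takes with use=systemd.
    subprocess.run(["hostnamectl", "set-hostname", name], check=True)

if __name__ == "__main__":
    set_hostname("compute-0")   # value taken from the log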
Nov 29 06:42:19 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:42:19 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:42:19 compute-0 sudo[23352]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:19 compute-0 sshd-session[21812]: Connection closed by 38.102.83.114 port 58834
Nov 29 06:42:19 compute-0 sshd-session[21765]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:42:19 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 06:42:19 compute-0 systemd[1]: session-6.scope: Consumed 2.503s CPU time.
Nov 29 06:42:19 compute-0 systemd-logind[782]: Session 6 logged out. Waiting for processes to exit.
Nov 29 06:42:19 compute-0 systemd-logind[782]: Removed session 6.
Nov 29 06:42:29 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:42:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:42:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:42:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 2.501s CPU time.
Nov 29 06:42:37 compute-0 systemd[1]: run-r407d657f34e7468eb3c46b5ff3b8c624.service: Deactivated successfully.
Nov 29 06:42:39 compute-0 sshd-session[29132]: Invalid user intel from 101.47.142.104 port 34990
Nov 29 06:42:40 compute-0 sshd-session[29132]: Received disconnect from 101.47.142.104 port 34990:11: Bye Bye [preauth]
Nov 29 06:42:40 compute-0 sshd-session[29132]: Disconnected from invalid user intel 101.47.142.104 port 34990 [preauth]
Nov 29 06:42:49 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:44:25 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 06:44:25 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 06:44:25 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 06:44:25 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 06:45:00 compute-0 sshd-session[29921]: Invalid user ubuntu from 101.47.142.104 port 44172
Nov 29 06:45:01 compute-0 sshd-session[29921]: Received disconnect from 101.47.142.104 port 44172:11: Bye Bye [preauth]
Nov 29 06:45:01 compute-0 sshd-session[29921]: Disconnected from invalid user ubuntu 101.47.142.104 port 44172 [preauth]
Nov 29 06:47:40 compute-0 sshd-session[29927]: Accepted publickey for zuul from 38.102.83.164 port 55982 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 06:47:40 compute-0 systemd-logind[782]: New session 7 of user zuul.
Nov 29 06:47:40 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 29 06:47:40 compute-0 sshd-session[29927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:47:41 compute-0 python3[30003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:47:43 compute-0 sudo[30117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujacgthuldohndarpheyzseqexnmyhi ; /usr/bin/python3'
Nov 29 06:47:43 compute-0 sudo[30117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:43 compute-0 python3[30119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:43 compute-0 sudo[30117]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:43 compute-0 sudo[30190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdbaexkzuzcnmtnawvphdghbtfmnexti ; /usr/bin/python3'
Nov 29 06:47:43 compute-0 sudo[30190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:43 compute-0 python3[30192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:43 compute-0 sudo[30190]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:43 compute-0 sudo[30216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbeuyljlakcedreklmbdtngavtapgekm ; /usr/bin/python3'
Nov 29 06:47:43 compute-0 sudo[30216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:44 compute-0 python3[30218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:44 compute-0 sudo[30216]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:44 compute-0 sudo[30289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfftosvyiczmsxycgdqucripbmwnyglu ; /usr/bin/python3'
Nov 29 06:47:44 compute-0 sudo[30289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:44 compute-0 python3[30291]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:44 compute-0 sudo[30289]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:44 compute-0 sudo[30315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckalptdgutcswozyvgtwzrogejatdzvo ; /usr/bin/python3'
Nov 29 06:47:44 compute-0 sudo[30315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:44 compute-0 python3[30317]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:44 compute-0 sudo[30315]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:44 compute-0 sudo[30388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvattqfoujmkufzqdhzzngxjfwzpkwey ; /usr/bin/python3'
Nov 29 06:47:44 compute-0 sudo[30388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:45 compute-0 python3[30390]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:45 compute-0 sudo[30388]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:45 compute-0 sudo[30414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjrkjhiuyrwrdkcakyrdgokwvubwdty ; /usr/bin/python3'
Nov 29 06:47:45 compute-0 sudo[30414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:45 compute-0 python3[30416]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:45 compute-0 sudo[30414]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:45 compute-0 sudo[30487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkqmntmjkgubunayrhogmqlkifkfuyk ; /usr/bin/python3'
Nov 29 06:47:45 compute-0 sudo[30487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:45 compute-0 python3[30489]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:45 compute-0 sudo[30487]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:45 compute-0 sudo[30513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsywgcfaszbhpdcjiteskgwfvbnmcbus ; /usr/bin/python3'
Nov 29 06:47:45 compute-0 sudo[30513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:46 compute-0 python3[30515]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:46 compute-0 sudo[30513]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:46 compute-0 sudo[30586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staxqayinxglgunyrrkammgmyyeglzpl ; /usr/bin/python3'
Nov 29 06:47:46 compute-0 sudo[30586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:46 compute-0 python3[30588]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:46 compute-0 sudo[30586]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:46 compute-0 sudo[30612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvxkgtxicnmcsfqdsefpmlsptrmcbrec ; /usr/bin/python3'
Nov 29 06:47:46 compute-0 sudo[30612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:46 compute-0 python3[30614]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:46 compute-0 sudo[30612]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:47 compute-0 sudo[30685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqowmdvdpspoqpgnlqdvaqbbrmanxxs ; /usr/bin/python3'
Nov 29 06:47:47 compute-0 sudo[30685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:47 compute-0 python3[30687]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:47 compute-0 sudo[30685]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:47 compute-0 sudo[30711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvjtgmxkpaimwavmbxxctpftsisybuzr ; /usr/bin/python3'
Nov 29 06:47:47 compute-0 sudo[30711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:47 compute-0 python3[30713]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:47 compute-0 sudo[30711]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:47 compute-0 sudo[30784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmkzcbhygeskrkkciskrphptbaprcaq ; /usr/bin/python3'
Nov 29 06:47:47 compute-0 sudo[30784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:47 compute-0 python3[30786]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398862.848674-33541-106252778903865/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:47 compute-0 sudo[30784]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:50 compute-0 sshd-session[30811]: Connection closed by 192.168.122.11 port 55318 [preauth]
Nov 29 06:47:50 compute-0 sshd-session[30812]: Connection closed by 192.168.122.11 port 55328 [preauth]
Nov 29 06:47:50 compute-0 sshd-session[30813]: Unable to negotiate with 192.168.122.11 port 55340: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 06:47:50 compute-0 sshd-session[30814]: Unable to negotiate with 192.168.122.11 port 55354: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 06:47:50 compute-0 sshd-session[30815]: Unable to negotiate with 192.168.122.11 port 55370: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 06:47:59 compute-0 sshd-session[29924]: Received disconnect from 101.47.142.104 port 41762:11: Bye Bye [preauth]
Nov 29 06:47:59 compute-0 sshd-session[29924]: Disconnected from 101.47.142.104 port 41762 [preauth]
Nov 29 06:47:59 compute-0 python3[30844]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:50:25 compute-0 systemd[1]: Starting dnf makecache...
Nov 29 06:50:26 compute-0 dnf[30847]: Failed determining last makecache time.
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-openstack-barbican-42b4c41831408a8e323 231 kB/s |  13 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.7 MB/s |  65 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-python-stevedore-c4acc5639fd2329372142 5.5 MB/s | 131 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-os-net-config-9758ab42364673d01bc5014e  12 MB/s | 349 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.6 MB/s |  42 kB     00:00
Nov 29 06:50:26 compute-0 dnf[30847]: delorean-python-designate-tests-tempest-347fdbc 764 kB/s |  18 kB     00:00
Nov 29 06:50:27 compute-0 dnf[30847]: delorean-openstack-glance-1fd12c29b339f30fe823e  26 kB/s |  18 kB     00:00
Nov 29 06:50:27 compute-0 dnf[30847]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  47 kB/s |  29 kB     00:00
Nov 29 06:50:27 compute-0 dnf[30847]: delorean-openstack-manila-3c01b7181572c95dac462 399 kB/s |  25 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-python-whitebox-neutron-tests-tempest- 4.7 MB/s | 154 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-openstack-octavia-ba397f07a7331190208c 1.0 MB/s |  26 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-openstack-watcher-c014f81a8647287f6dcc 624 kB/s |  16 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-python-tcib-1124124ec06aadbac34f0d340b  18 kB/s | 7.4 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.2 MB/s | 144 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-openstack-swift-dc98a8463506ac520c469a 610 kB/s |  14 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-python-tempestconf-8515371b7cceebd4282 1.9 MB/s |  53 kB     00:00
Nov 29 06:50:28 compute-0 dnf[30847]: delorean-openstack-heat-ui-013accbfd179753bc3f0 468 kB/s |  96 kB     00:00
Nov 29 06:50:29 compute-0 dnf[30847]: CentOS Stream 9 - BaseOS                         74 kB/s | 7.3 kB     00:00
Nov 29 06:50:29 compute-0 dnf[30847]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 29 06:50:29 compute-0 dnf[30847]: CentOS Stream 9 - CRB                            74 kB/s | 7.2 kB     00:00
Nov 29 06:50:29 compute-0 dnf[30847]: CentOS Stream 9 - Extras packages                24 kB/s | 8.3 kB     00:00
Nov 29 06:50:30 compute-0 dnf[30847]: dlrn-antelope-testing                           3.1 MB/s | 1.1 MB     00:00
Nov 29 06:50:30 compute-0 dnf[30847]: dlrn-antelope-build-deps                        2.1 MB/s | 461 kB     00:00
Nov 29 06:50:30 compute-0 dnf[30847]: centos9-rabbitmq                                2.5 MB/s | 123 kB     00:00
Nov 29 06:50:31 compute-0 dnf[30847]: centos9-storage                                  25 MB/s | 415 kB     00:00
Nov 29 06:50:31 compute-0 dnf[30847]: centos9-opstools                                3.5 MB/s |  51 kB     00:00
Nov 29 06:50:31 compute-0 dnf[30847]: NFV SIG OpenvSwitch                             1.8 MB/s | 456 kB     00:00
Nov 29 06:50:32 compute-0 dnf[30847]: repo-setup-centos-appstream                      36 MB/s |  25 MB     00:00
Nov 29 06:50:40 compute-0 dnf[30847]: repo-setup-centos-baseos                         80 MB/s | 8.8 MB     00:00
Nov 29 06:50:41 compute-0 dnf[30847]: repo-setup-centos-highavailability               29 MB/s | 744 kB     00:00
Nov 29 06:50:42 compute-0 dnf[30847]: repo-setup-centos-powertools                     73 MB/s | 7.3 MB     00:00
Nov 29 06:50:45 compute-0 dnf[30847]: Extra Packages for Enterprise Linux 9 - x86_64   16 MB/s |  20 MB     00:01
Nov 29 06:51:06 compute-0 dnf[30847]: Metadata cache created.
Nov 29 06:51:06 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 06:51:06 compute-0 systemd[1]: Finished dnf makecache.
Nov 29 06:51:06 compute-0 systemd[1]: dnf-makecache.service: Consumed 35.155s CPU time.
Nov 29 06:52:07 compute-0 sshd-session[30949]: Connection closed by 101.47.142.104 port 54618 [preauth]
Nov 29 06:52:59 compute-0 sshd-session[29930]: Received disconnect from 38.102.83.164 port 55982:11: disconnected by user
Nov 29 06:52:59 compute-0 sshd-session[29930]: Disconnected from user zuul 38.102.83.164 port 55982
Nov 29 06:52:59 compute-0 sshd-session[29927]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:52:59 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 06:52:59 compute-0 systemd[1]: session-7.scope: Consumed 5.647s CPU time.
Nov 29 06:52:59 compute-0 systemd-logind[782]: Session 7 logged out. Waiting for processes to exit.
Nov 29 06:52:59 compute-0 systemd-logind[782]: Removed session 7.
Nov 29 06:54:30 compute-0 sshd-session[30952]: Received disconnect from 101.47.142.104 port 50468:11: Bye Bye [preauth]
Nov 29 06:54:30 compute-0 sshd-session[30952]: Disconnected from authenticating user root 101.47.142.104 port 50468 [preauth]
Nov 29 06:59:10 compute-0 sshd-session[30956]: Invalid user testftp from 101.47.142.104 port 41032
Nov 29 06:59:11 compute-0 sshd-session[30956]: Received disconnect from 101.47.142.104 port 41032:11: Bye Bye [preauth]
Nov 29 06:59:11 compute-0 sshd-session[30956]: Disconnected from invalid user testftp 101.47.142.104 port 41032 [preauth]
Nov 29 07:01:01 compute-0 CROND[30960]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 07:01:01 compute-0 run-parts[30963]: (/etc/cron.hourly) starting 0anacron
Nov 29 07:01:01 compute-0 anacron[30971]: Anacron started on 2025-11-29
Nov 29 07:01:01 compute-0 anacron[30971]: Will run job `cron.daily' in 24 min.
Nov 29 07:01:01 compute-0 anacron[30971]: Will run job `cron.weekly' in 44 min.
Nov 29 07:01:01 compute-0 anacron[30971]: Will run job `cron.monthly' in 64 min.
Nov 29 07:01:01 compute-0 anacron[30971]: Jobs will be executed sequentially
Nov 29 07:01:01 compute-0 run-parts[30973]: (/etc/cron.hourly) finished 0anacron
Nov 29 07:01:01 compute-0 CROND[30959]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 07:05:55 compute-0 sshd[1003]: Timeout before authentication for connection from 101.47.142.104 to 38.102.83.203, pid = 30976
Nov 29 07:06:12 compute-0 sshd[1003]: drop connection #0 from [101.47.142.104]:35312 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:08:39 compute-0 sshd-session[30980]: Invalid user builder from 101.47.142.104 port 56346
Nov 29 07:08:39 compute-0 sshd-session[30980]: Received disconnect from 101.47.142.104 port 56346:11: Bye Bye [preauth]
Nov 29 07:08:39 compute-0 sshd-session[30980]: Disconnected from invalid user builder 101.47.142.104 port 56346 [preauth]
Nov 29 07:11:25 compute-0 sshd-session[30985]: Received disconnect from 20.185.243.158 port 42058:11: Bye Bye [preauth]
Nov 29 07:11:25 compute-0 sshd-session[30985]: Disconnected from authenticating user root 20.185.243.158 port 42058 [preauth]
Nov 29 07:12:31 compute-0 sshd-session[30987]: Invalid user work from 103.236.140.19 port 51786
Nov 29 07:12:31 compute-0 sshd-session[30987]: Received disconnect from 103.236.140.19 port 51786:11: Bye Bye [preauth]
Nov 29 07:12:31 compute-0 sshd-session[30987]: Disconnected from invalid user work 103.236.140.19 port 51786 [preauth]
Nov 29 07:12:44 compute-0 sshd-session[30989]: Invalid user bob from 103.234.151.178 port 10572
Nov 29 07:12:44 compute-0 sshd-session[30989]: Received disconnect from 103.234.151.178 port 10572:11: Bye Bye [preauth]
Nov 29 07:12:44 compute-0 sshd-session[30989]: Disconnected from invalid user bob 103.234.151.178 port 10572 [preauth]
Nov 29 07:12:56 compute-0 sshd[1003]: Timeout before authentication for connection from 101.47.142.104 to 38.102.83.203, pid = 30983
Nov 29 07:13:19 compute-0 sshd[1003]: drop connection #0 from [101.47.142.104]:53088 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:14:28 compute-0 sshd-session[30993]: Received disconnect from 114.34.106.146 port 47154:11: Bye Bye [preauth]
Nov 29 07:14:28 compute-0 sshd-session[30993]: Disconnected from authenticating user root 114.34.106.146 port 47154 [preauth]
Nov 29 07:15:28 compute-0 sshd-session[30995]: Invalid user work from 20.185.243.158 port 33118
Nov 29 07:15:28 compute-0 sshd-session[30995]: Received disconnect from 20.185.243.158 port 33118:11: Bye Bye [preauth]
Nov 29 07:15:28 compute-0 sshd-session[30995]: Disconnected from invalid user work 20.185.243.158 port 33118 [preauth]
Nov 29 07:15:44 compute-0 sshd-session[30997]: Connection closed by 101.47.142.104 port 52674 [preauth]
Nov 29 07:15:57 compute-0 sshd-session[30999]: Invalid user manager from 103.236.140.19 port 32888
Nov 29 07:15:57 compute-0 sshd-session[30999]: Received disconnect from 103.236.140.19 port 32888:11: Bye Bye [preauth]
Nov 29 07:15:57 compute-0 sshd-session[30999]: Disconnected from invalid user manager 103.236.140.19 port 32888 [preauth]
Nov 29 07:16:19 compute-0 sshd[1003]: Timeout before authentication for connection from 115.190.181.91 to 38.102.83.203, pid = 30992
Nov 29 07:16:25 compute-0 sshd-session[31002]: Accepted publickey for zuul from 192.168.122.30 port 58596 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:16:25 compute-0 systemd-logind[782]: New session 8 of user zuul.
Nov 29 07:16:25 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 29 07:16:25 compute-0 sshd-session[31002]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:16:26 compute-0 python3.9[31155]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:27 compute-0 sudo[31334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmfsnbntringbiemttyigkwjwevgwgrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400586.7767584-32-6894978409873/AnsiballZ_command.py'
Nov 29 07:16:27 compute-0 sudo[31334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:27 compute-0 python3.9[31336]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
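
The command above is the node's repo bootstrap: it fetches the repo-setup tool, installs it into a throwaway virtualenv, and applies the current "podified" antelope repositories. A minimal standalone shell sketch of the same sequence (URL, branch, and PBR_VERSION taken verbatim from the logged _raw_params; the exact set of .repo files the tool writes is not shown in this log):

    set -euxo pipefail
    cd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    cd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./          # PBR needs a version hint when installing from a plain tarball checkout
    ./venv/bin/repo-setup current-podified -b antelope   # lays down the antelope repo configuration
    cd /var/tmp && rm -rf repo-setup-main
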
Nov 29 07:16:33 compute-0 sshd-session[31357]: Invalid user autcom from 20.185.243.158 port 36530
Nov 29 07:16:33 compute-0 sshd-session[31357]: Received disconnect from 20.185.243.158 port 36530:11: Bye Bye [preauth]
Nov 29 07:16:33 compute-0 sshd-session[31357]: Disconnected from invalid user autcom 20.185.243.158 port 36530 [preauth]
Nov 29 07:16:36 compute-0 sshd-session[31363]: Received disconnect from 114.34.106.146 port 58996:11: Bye Bye [preauth]
Nov 29 07:16:36 compute-0 sshd-session[31363]: Disconnected from authenticating user root 114.34.106.146 port 58996 [preauth]
Nov 29 07:16:40 compute-0 sudo[31334]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:40 compute-0 sshd-session[31005]: Connection closed by 192.168.122.30 port 58596
Nov 29 07:16:40 compute-0 sshd-session[31002]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:16:40 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 07:16:40 compute-0 systemd[1]: session-8.scope: Consumed 8.944s CPU time.
Nov 29 07:16:40 compute-0 systemd-logind[782]: Session 8 logged out. Waiting for processes to exit.
Nov 29 07:16:40 compute-0 systemd-logind[782]: Removed session 8.
Nov 29 07:16:45 compute-0 sshd-session[31398]: Received disconnect from 103.234.151.178 port 61698:11: Bye Bye [preauth]
Nov 29 07:16:45 compute-0 sshd-session[31398]: Disconnected from authenticating user root 103.234.151.178 port 61698 [preauth]
Nov 29 07:16:56 compute-0 sshd-session[31400]: Accepted publickey for zuul from 192.168.122.30 port 45272 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:16:56 compute-0 systemd-logind[782]: New session 9 of user zuul.
Nov 29 07:16:56 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 29 07:16:56 compute-0 sshd-session[31400]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:16:57 compute-0 python3.9[31553]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 07:16:59 compute-0 python3.9[31727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:59 compute-0 sudo[31877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igcvmqulwinetuejcaddpkiklypjqttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400619.2818325-45-163546356084386/AnsiballZ_command.py'
Nov 29 07:16:59 compute-0 sudo[31877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:59 compute-0 python3.9[31879]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:17:00 compute-0 sudo[31877]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:00 compute-0 sudo[32030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwszvbqrsrewymefmdulgyczuptmmhff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400620.3399265-57-222956431816436/AnsiballZ_stat.py'
Nov 29 07:17:00 compute-0 sudo[32030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:00 compute-0 python3.9[32032]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:17:00 compute-0 sudo[32030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:01 compute-0 sudo[32182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvmkuhhnqafuiblridqbratofxygdqlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400621.1752317-65-262044315639999/AnsiballZ_file.py'
Nov 29 07:17:01 compute-0 sudo[32182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:01 compute-0 python3.9[32184]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:01 compute-0 sudo[32182]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:02 compute-0 sudo[32334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caglcontmfhhaoxogcohnprtkaihhheb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400622.0102458-73-112017197667897/AnsiballZ_stat.py'
Nov 29 07:17:02 compute-0 sudo[32334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:02 compute-0 python3.9[32336]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:17:02 compute-0 sudo[32334]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:02 compute-0 sudo[32457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtqhkywfpktbsvzorvdmjbbvxpltawx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400622.0102458-73-112017197667897/AnsiballZ_copy.py'
Nov 29 07:17:03 compute-0 sudo[32457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:03 compute-0 python3.9[32459]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400622.0102458-73-112017197667897/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:17:03 compute-0 sudo[32457]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:03 compute-0 sudo[32609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrddxcmecbfqusvkmkavrrgwgonkxkqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400623.366577-88-189821792447239/AnsiballZ_setup.py'
Nov 29 07:17:03 compute-0 sudo[32609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:03 compute-0 python3.9[32611]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:04 compute-0 sudo[32609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:04 compute-0 sudo[32765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chvjasxrivfslthtbdzbyzqjkzdjhper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400624.37921-96-47588397830902/AnsiballZ_file.py'
Nov 29 07:17:04 compute-0 sudo[32765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:04 compute-0 python3.9[32767]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:04 compute-0 sudo[32765]: pam_unix(sudo:session): session closed for user root
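
The file task above creates journald's on-disk log directory so logs persist across reboots. A hedged shell equivalent of the logged parameters (path, owner, group, and mode are from the invocation; the SELinux type is applied explicitly by the module):

    install -d -m 0750 -o root -g root /var/log/journal
    chcon -t var_log_t /var/log/journal   # matches setype=var_log_t requested in the task
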
Nov 29 07:17:05 compute-0 sudo[32917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addkhrmmigfdlznplkaxihjlnvgcopuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400625.1135292-105-149708195501940/AnsiballZ_file.py'
Nov 29 07:17:05 compute-0 sudo[32917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:05 compute-0 python3.9[32919]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:17:05 compute-0 sudo[32917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:06 compute-0 python3.9[33069]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:17:09 compute-0 python3.9[33323]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
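
Running lineinfile against /proc/cmdline with state=present and create=False works as an assertion: if the parameter is already on the kernel command line nothing changes, and if it is missing the module would try to modify the read-only procfs file and the task would fail. The same check by hand:

    grep -q 'cloud-init=disabled' /proc/cmdline && echo 'cloud-init disabled on the kernel command line'
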
Nov 29 07:17:10 compute-0 python3.9[33473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:11 compute-0 python3.9[33627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:17:12 compute-0 sudo[33783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xypcgbtogrzhzchaafagoywdzecolskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400632.3972409-153-94723912328207/AnsiballZ_setup.py'
Nov 29 07:17:12 compute-0 sudo[33783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:13 compute-0 python3.9[33785]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:17:14 compute-0 sudo[33783]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:14 compute-0 sudo[33869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxgwmzqxtnhafclufvsbfmtczvzmhevh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400632.3972409-153-94723912328207/AnsiballZ_dnf.py'
Nov 29 07:17:14 compute-0 sudo[33869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:17:14 compute-0 python3.9[33871]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
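
This dnf task installs the node's baseline tooling in one transaction (the transaction runs until the sudo session closes at 07:20:12 and is then verified with rpm -V). The equivalent manual install, package list copied verbatim from the invocation:

    dnf -y install \
        driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml \
        rsync tmpwatch sysstat iproute-tc ksmtuned \
        systemd-container crypto-policies-scripts grubby sos
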
Nov 29 07:17:15 compute-0 sshd-session[33786]: Received disconnect from 103.236.140.19 port 57598:11: Bye Bye [preauth]
Nov 29 07:17:15 compute-0 sshd-session[33786]: Disconnected from authenticating user root 103.236.140.19 port 57598 [preauth]
Nov 29 07:17:42 compute-0 sshd-session[33986]: Received disconnect from 20.185.243.158 port 34978:11: Bye Bye [preauth]
Nov 29 07:17:42 compute-0 sshd-session[33986]: Disconnected from authenticating user root 20.185.243.158 port 34978 [preauth]
Nov 29 07:17:54 compute-0 sshd-session[34017]: Invalid user admin123 from 114.34.106.146 port 51004
Nov 29 07:17:54 compute-0 sshd-session[34017]: Received disconnect from 114.34.106.146 port 51004:11: Bye Bye [preauth]
Nov 29 07:17:54 compute-0 sshd-session[34017]: Disconnected from invalid user admin123 114.34.106.146 port 51004 [preauth]
Nov 29 07:18:05 compute-0 sshd-session[34020]: Invalid user sopuser from 103.234.151.178 port 21990
Nov 29 07:18:05 compute-0 sshd-session[34020]: Received disconnect from 103.234.151.178 port 21990:11: Bye Bye [preauth]
Nov 29 07:18:05 compute-0 sshd-session[34020]: Disconnected from invalid user sopuser 103.234.151.178 port 21990 [preauth]
Nov 29 07:18:19 compute-0 systemd[1]: Reloading.
Nov 29 07:18:19 compute-0 systemd-rc-local-generator[34072]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:18:20 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 07:18:20 compute-0 systemd[1]: Reloading.
Nov 29 07:18:20 compute-0 systemd-rc-local-generator[34114]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:18:21 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 07:18:21 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 07:18:21 compute-0 systemd[1]: Reloading.
Nov 29 07:18:21 compute-0 systemd-rc-local-generator[34151]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:18:21 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 07:18:22 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:18:22 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:18:22 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:18:35 compute-0 sshd-session[34186]: Invalid user tempuser from 103.236.140.19 port 57890
Nov 29 07:18:36 compute-0 sshd-session[34186]: Received disconnect from 103.236.140.19 port 57890:11: Bye Bye [preauth]
Nov 29 07:18:36 compute-0 sshd-session[34186]: Disconnected from invalid user tempuser 103.236.140.19 port 57890 [preauth]
Nov 29 07:18:54 compute-0 sshd-session[34210]: Invalid user manager from 20.185.243.158 port 37212
Nov 29 07:18:54 compute-0 sshd-session[34210]: Received disconnect from 20.185.243.158 port 37212:11: Bye Bye [preauth]
Nov 29 07:18:54 compute-0 sshd-session[34210]: Disconnected from invalid user manager 20.185.243.158 port 37212 [preauth]
Nov 29 07:19:10 compute-0 sshd[1003]: Timeout before authentication for connection from 120.48.39.224 to 38.102.83.203, pid = 33103
Nov 29 07:19:15 compute-0 sshd-session[34260]: Received disconnect from 114.34.106.146 port 41058:11: Bye Bye [preauth]
Nov 29 07:19:15 compute-0 sshd-session[34260]: Disconnected from authenticating user root 114.34.106.146 port 41058 [preauth]
Nov 29 07:19:23 compute-0 sshd-session[34299]: Received disconnect from 103.234.151.178 port 45808:11: Bye Bye [preauth]
Nov 29 07:19:23 compute-0 sshd-session[34299]: Disconnected from authenticating user root 103.234.151.178 port 45808 [preauth]
Nov 29 07:19:51 compute-0 sshd-session[34343]: Received disconnect from 103.236.140.19 port 51650:11: Bye Bye [preauth]
Nov 29 07:19:51 compute-0 sshd-session[34343]: Disconnected from authenticating user root 103.236.140.19 port 51650 [preauth]
Nov 29 07:19:59 compute-0 sshd[1003]: Timeout before authentication for connection from 101.47.142.104 to 38.102.83.203, pid = 34019
Nov 29 07:20:08 compute-0 sshd-session[34383]: Invalid user runner from 20.185.243.158 port 32850
Nov 29 07:20:08 compute-0 sshd-session[34383]: Received disconnect from 20.185.243.158 port 32850:11: Bye Bye [preauth]
Nov 29 07:20:08 compute-0 sshd-session[34383]: Disconnected from invalid user runner 20.185.243.158 port 32850 [preauth]
Nov 29 07:20:10 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:20:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:20:10 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 07:20:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:20:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:20:10 compute-0 systemd[1]: Reloading.
Nov 29 07:20:10 compute-0 systemd-rc-local-generator[34492]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:20:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:20:12 compute-0 sudo[33869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:12 compute-0 sudo[35406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vykwqpmzixihjvqdatcvdsmouoxgkfvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400812.268741-165-40659506132346/AnsiballZ_command.py'
Nov 29 07:20:12 compute-0 sudo[35406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:20:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:20:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.325s CPU time.
Nov 29 07:20:12 compute-0 systemd[1]: run-rbe338f8eddcd47a29d1a00fa2d0d3c6e.service: Deactivated successfully.
Nov 29 07:20:12 compute-0 python3.9[35408]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:20:13 compute-0 sudo[35406]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:14 compute-0 sudo[35688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahzfycovswfsyagdppzjdxudyxorjec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400814.0396147-173-236683545194972/AnsiballZ_selinux.py'
Nov 29 07:20:14 compute-0 sudo[35688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:15 compute-0 python3.9[35690]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:20:15 compute-0 sudo[35688]: pam_unix(sudo:session): session closed for user root
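
The ansible.posix.selinux call keeps the host enforcing under the targeted policy and writes the same settings to /etc/selinux/config. Checking the result by hand would look roughly like this (key names are the standard ones in that file):

    getenforce                                     # expect: Enforcing
    grep -E '^SELINUX(TYPE)?=' /etc/selinux/config
    # SELINUX=enforcing
    # SELINUXTYPE=targeted
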
Nov 29 07:20:15 compute-0 sudo[35840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wghrphzaycymgjcfyoixkpkzjztdkqum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400815.4153652-184-201201679302107/AnsiballZ_command.py'
Nov 29 07:20:15 compute-0 sudo[35840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:15 compute-0 python3.9[35842]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:20:17 compute-0 sudo[35840]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:17 compute-0 sudo[35993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wacnovilhlrsnqzxmtylklqnvpucqdrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400817.566765-192-212021255464763/AnsiballZ_file.py'
Nov 29 07:20:17 compute-0 sudo[35993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:19 compute-0 python3.9[35995]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:19 compute-0 sudo[35993]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:20 compute-0 sudo[36145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxxwtrxmdwpduygxxweyviqsetnzhih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400819.5654762-200-252756985564019/AnsiballZ_mount.py'
Nov 29 07:20:20 compute-0 sudo[36145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:21 compute-0 python3.9[36147]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:20:21 compute-0 sudo[36145]: pam_unix(sudo:session): session closed for user root
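
The three tasks above create a 1 GiB swap file and register it in fstab; note that ansible.posix.mount with state=present only edits /etc/fstab, and no mkswap/swapon step appears in this log. A sketch of the same steps, parameters taken from the invocations:

    dd if=/dev/zero of=/swap count=1024 bs=1M   # 1024 x 1 MiB = 1 GiB
    chmod 0600 /swap
    # fstab entry implied by src=/swap, fstype=swap, opts=sw, dump=0, passno=0:
    echo '/swap none swap sw 0 0' >> /etc/fstab
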
Nov 29 07:20:22 compute-0 sudo[36297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmscbpvzhozrituxaywhifpnjpddfgyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400821.9119177-228-17997123297858/AnsiballZ_file.py'
Nov 29 07:20:22 compute-0 sudo[36297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:22 compute-0 python3.9[36299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:22 compute-0 sudo[36297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:22 compute-0 sshd[1003]: drop connection #0 from [101.47.142.104]:43862 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:20:23 compute-0 sudo[36449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvmnidbphuedggpvmfegburssyzeatb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400822.7027717-236-145236790745889/AnsiballZ_stat.py'
Nov 29 07:20:23 compute-0 sudo[36449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:25 compute-0 python3.9[36451]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:25 compute-0 sudo[36449]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:26 compute-0 sudo[36572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dufhyrvymikyeqtgjhvspvtmktheyqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400822.7027717-236-145236790745889/AnsiballZ_copy.py'
Nov 29 07:20:26 compute-0 sudo[36572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:26 compute-0 python3.9[36574]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400822.7027717-236-145236790745889/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:26 compute-0 sudo[36572]: pam_unix(sudo:session): session closed for user root
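
The copy task above drops the job's CA bundle into the system trust anchors directory. By hand that is roughly the following; the log excerpt does not show a subsequent update-ca-trust run, which is normally what makes a new anchor take effect on EL9:

    install -m 0644 -o root -g root tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust extract   # assumed follow-up; not present in this log excerpt
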
Nov 29 07:20:27 compute-0 sudo[36724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fklqrxsyeedvjqcafolxrjajutkrpyec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400827.021037-260-41260417203415/AnsiballZ_stat.py'
Nov 29 07:20:27 compute-0 sudo[36724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:29 compute-0 python3.9[36726]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:29 compute-0 sudo[36724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:30 compute-0 sudo[36878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdaxwjyejeuoeejvdacnnotqydvzdjnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400830.1799407-268-115184278801742/AnsiballZ_command.py'
Nov 29 07:20:30 compute-0 sudo[36878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:30 compute-0 python3.9[36880]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:20:30 compute-0 sudo[36878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:31 compute-0 sudo[37031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzobkaksicxnvebhmqudbvndtqimyij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400830.9490993-276-10933530330621/AnsiballZ_file.py'
Nov 29 07:20:31 compute-0 sudo[37031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:31 compute-0 sshd-session[36745]: Received disconnect from 114.34.106.146 port 34474:11: Bye Bye [preauth]
Nov 29 07:20:31 compute-0 sshd-session[36745]: Disconnected from authenticating user root 114.34.106.146 port 34474 [preauth]
Nov 29 07:20:31 compute-0 python3.9[37033]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:31 compute-0 sudo[37031]: pam_unix(sudo:session): session closed for user root
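
These two tasks seed the LVM devices file that restricts which block devices lvm2 scans. The manual equivalent of what was logged:

    /usr/sbin/vgimportdevices --all              # import any visible volume groups into system.devices
    touch /etc/lvm/devices/system.devices        # ensure the file exists even if no VGs were found
    chmod 0600 /etc/lvm/devices/system.devices
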
Nov 29 07:20:32 compute-0 sudo[37183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijntrzojxbwojvpubjgahqvhkzfnyxnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400831.7637956-287-256644869481986/AnsiballZ_getent.py'
Nov 29 07:20:32 compute-0 sudo[37183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:32 compute-0 python3.9[37185]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:20:32 compute-0 sudo[37183]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:32 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:20:32 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:20:33 compute-0 sudo[37337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqyxixbigstqucvafzpggieuohwvgriw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400832.6263208-295-156709299670346/AnsiballZ_group.py'
Nov 29 07:20:33 compute-0 sudo[37337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:33 compute-0 python3.9[37339]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:20:33 compute-0 groupadd[37340]: group added to /etc/group: name=qemu, GID=107
Nov 29 07:20:33 compute-0 groupadd[37340]: group added to /etc/gshadow: name=qemu
Nov 29 07:20:33 compute-0 groupadd[37340]: new group: name=qemu, GID=107
Nov 29 07:20:33 compute-0 sudo[37337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:34 compute-0 sudo[37495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhkwtpvjlymrlfzyrxhyglmsqzevxteb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400833.5742927-303-58494228153765/AnsiballZ_user.py'
Nov 29 07:20:34 compute-0 sudo[37495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:34 compute-0 python3.9[37497]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:20:34 compute-0 useradd[37499]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:20:34 compute-0 sudo[37495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:34 compute-0 sudo[37655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdhxgkntrdduvjaecsyioiahonpnyymh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400834.6176105-311-140232883390077/AnsiballZ_getent.py'
Nov 29 07:20:34 compute-0 sudo[37655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:35 compute-0 python3.9[37657]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:20:35 compute-0 sudo[37655]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:35 compute-0 sudo[37808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xalxsiskzruotprjyqxsypxcfpfcdgwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400835.3496525-319-125728489214648/AnsiballZ_group.py'
Nov 29 07:20:35 compute-0 sudo[37808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:35 compute-0 python3.9[37810]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:20:35 compute-0 groupadd[37811]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 29 07:20:35 compute-0 groupadd[37811]: group added to /etc/gshadow: name=hugetlbfs
Nov 29 07:20:35 compute-0 groupadd[37811]: new group: name=hugetlbfs, GID=42477
Nov 29 07:20:35 compute-0 sudo[37808]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:36 compute-0 sudo[37966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epsrcqakpwlsbnngaysphamtukufaobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400836.2456388-328-118203370107171/AnsiballZ_file.py'
Nov 29 07:20:36 compute-0 sudo[37966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:36 compute-0 python3.9[37968]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:20:36 compute-0 sudo[37966]: pam_unix(sudo:session): session closed for user root
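
The account and directory tasks in this stretch pre-create the qemu account (fixed UID/GID 107), the hugetlbfs group, and the vhost socket directory. A shell sketch with the IDs, shell, and SELinux labels copied from the logged parameters:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin -m qemu
    groupadd -g 42477 hugetlbfs
    install -d -m 0755 -o qemu -g qemu /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets
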
Nov 29 07:20:37 compute-0 sudo[38118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdmeupxxoiltzvsigufglipleqjgtoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400837.1226807-339-230505248740619/AnsiballZ_dnf.py'
Nov 29 07:20:37 compute-0 sudo[38118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:37 compute-0 python3.9[38120]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:20:39 compute-0 sudo[38118]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:40 compute-0 sudo[38274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbpbqfnyvyubsekjpyxhikazzductjvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400839.810905-347-230004759418195/AnsiballZ_file.py'
Nov 29 07:20:40 compute-0 sudo[38274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:40 compute-0 python3.9[38276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:40 compute-0 sudo[38274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:40 compute-0 sudo[38426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emjtgsdicjfmsejfgcfihpsukdxikxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400840.498513-355-79743492040259/AnsiballZ_stat.py'
Nov 29 07:20:40 compute-0 sudo[38426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:41 compute-0 python3.9[38428]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:41 compute-0 sudo[38426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:41 compute-0 sshd-session[38147]: Received disconnect from 103.234.151.178 port 6094:11: Bye Bye [preauth]
Nov 29 07:20:41 compute-0 sshd-session[38147]: Disconnected from authenticating user root 103.234.151.178 port 6094 [preauth]
Nov 29 07:20:41 compute-0 sudo[38549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzojwsttfvrluxffeocqxgwvgpglbzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400840.498513-355-79743492040259/AnsiballZ_copy.py'
Nov 29 07:20:41 compute-0 sudo[38549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:41 compute-0 python3.9[38551]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400840.498513-355-79743492040259/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:41 compute-0 sudo[38549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:42 compute-0 sudo[38701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krqdrdufedobmegxdspzmevosoppcyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400841.7809916-370-154129361370722/AnsiballZ_systemd.py'
Nov 29 07:20:42 compute-0 sudo[38701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:42 compute-0 python3.9[38703]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:20:42 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:20:42 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 07:20:42 compute-0 kernel: Bridge firewalling registered
Nov 29 07:20:42 compute-0 systemd-modules-load[38707]: Inserted module 'br_netfilter'
Nov 29 07:20:42 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:20:42 compute-0 sudo[38701]: pam_unix(sudo:session): session closed for user root
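
The contents of /etc/modules-load.d/99-edpm.conf are not logged (content=NOT_LOGGING_PARAMETER), but the restart of systemd-modules-load that follows shows br_netfilter being inserted. Assuming that single module, the manual equivalent would be:

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # assumed minimal content; the real file may list more modules
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter                               # confirms the bridge netfilter module is loaded
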
Nov 29 07:20:43 compute-0 sudo[38861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhxcveamrhejgxebuceitoirsovytsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400843.0892203-378-53440990487271/AnsiballZ_stat.py'
Nov 29 07:20:43 compute-0 sudo[38861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:43 compute-0 python3.9[38863]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:43 compute-0 sudo[38861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:43 compute-0 sudo[38984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpmiuklywfhmhvlmdjmvsriukrchqiny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400843.0892203-378-53440990487271/AnsiballZ_copy.py'
Nov 29 07:20:43 compute-0 sudo[38984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:44 compute-0 python3.9[38986]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400843.0892203-378-53440990487271/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:44 compute-0 sudo[38984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:45 compute-0 sudo[39136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvkfhaokpcionttwirulpadxnwgtumar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400844.8268359-396-134708710817769/AnsiballZ_dnf.py'
Nov 29 07:20:45 compute-0 sudo[39136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:45 compute-0 python3.9[39138]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:20:50 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:20:50 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:20:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:20:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:20:50 compute-0 systemd[1]: Reloading.
Nov 29 07:20:50 compute-0 systemd-rc-local-generator[39199]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:20:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:20:52 compute-0 sudo[39136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:52 compute-0 python3.9[41135]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:53 compute-0 python3.9[42186]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:20:54 compute-0 python3.9[42876]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:55 compute-0 sudo[43158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgypvzyrilqtutewkaendkwosytxkzlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400854.9130952-435-150129703207738/AnsiballZ_command.py'
Nov 29 07:20:55 compute-0 sudo[43158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:55 compute-0 python3.9[43160]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:20:55 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:20:56 compute-0 systemd[1]: Starting Authorization Manager...
Nov 29 07:20:56 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:20:56 compute-0 polkitd[43585]: Started polkitd version 0.117
Nov 29 07:20:56 compute-0 polkitd[43585]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:20:56 compute-0 polkitd[43585]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:20:56 compute-0 polkitd[43585]: Finished loading, compiling and executing 2 rules
Nov 29 07:20:56 compute-0 systemd[1]: Started Authorization Manager.
Nov 29 07:20:56 compute-0 polkitd[43585]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 29 07:20:56 compute-0 sudo[43158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:56 compute-0 sudo[43753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glezecxlhfiopssvluuojyhybbcsfdrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400856.5849535-444-202657605767821/AnsiballZ_systemd.py'
Nov 29 07:20:56 compute-0 sudo[43753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:20:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:20:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.279s CPU time.
Nov 29 07:20:57 compute-0 systemd[1]: run-re390e07528dd4daeba0742b93fe0ac19.service: Deactivated successfully.
Nov 29 07:20:57 compute-0 python3.9[43755]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:20:57 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:20:57 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:20:57 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:20:57 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:20:57 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:20:57 compute-0 sudo[43753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:58 compute-0 python3.9[43918]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:21:00 compute-0 sudo[44068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqdfjelzmvtuhmzjfthmmbobenlrnpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400860.0288742-501-266177157011810/AnsiballZ_systemd.py'
Nov 29 07:21:00 compute-0 sudo[44068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:00 compute-0 python3.9[44070]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:21:00 compute-0 systemd[1]: Reloading.
Nov 29 07:21:00 compute-0 systemd-rc-local-generator[44099]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:21:00 compute-0 sudo[44068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:01 compute-0 sudo[44257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-espoyqgleyerrqvlietszaknxltyfhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400861.1603203-501-204029480576135/AnsiballZ_systemd.py'
Nov 29 07:21:01 compute-0 sudo[44257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:01 compute-0 python3.9[44259]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:21:01 compute-0 systemd[1]: Reloading.
Nov 29 07:21:01 compute-0 systemd-rc-local-generator[44290]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:21:02 compute-0 sudo[44257]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:02 compute-0 sudo[44447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmuozhzmypijihxwhgyoftyuobpuioih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400862.3304212-517-267032858745656/AnsiballZ_command.py'
Nov 29 07:21:02 compute-0 sudo[44447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:02 compute-0 python3.9[44449]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:02 compute-0 sudo[44447]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:03 compute-0 sudo[44600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcdepbogbpisawtoupzhwebrbovghott ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400863.0959375-525-34452623495859/AnsiballZ_command.py'
Nov 29 07:21:03 compute-0 sudo[44600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:03 compute-0 python3.9[44602]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:03 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 07:21:03 compute-0 sudo[44600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:04 compute-0 sudo[44753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbhchssuluxvesexhiviqqlnfhglxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400863.8172674-533-136598639871524/AnsiballZ_command.py'
Nov 29 07:21:04 compute-0 sudo[44753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:04 compute-0 python3.9[44755]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:05 compute-0 sudo[44753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:06 compute-0 sudo[44915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwzxmmlbebfengqarldagrcmyhkzwpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400866.0775368-541-230503437519785/AnsiballZ_command.py'
Nov 29 07:21:06 compute-0 sudo[44915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:06 compute-0 python3.9[44917]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:06 compute-0 sudo[44915]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:07 compute-0 sudo[45068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhfhfjjzkzespiuxazvtvvnjfqqxyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400866.8263633-549-53836660762458/AnsiballZ_systemd.py'
Nov 29 07:21:07 compute-0 sudo[45068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:07 compute-0 python3.9[45070]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:21:07 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 07:21:07 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 07:21:07 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 07:21:07 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 29 07:21:07 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 07:21:07 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 29 07:21:07 compute-0 sudo[45068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:07 compute-0 sshd-session[31403]: Connection closed by 192.168.122.30 port 45272
Nov 29 07:21:07 compute-0 sshd-session[31400]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:21:07 compute-0 systemd-logind[782]: Session 9 logged out. Waiting for processes to exit.
Nov 29 07:21:07 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 07:21:07 compute-0 systemd[1]: session-9.scope: Consumed 2min 24.237s CPU time.
Nov 29 07:21:07 compute-0 systemd-logind[782]: Removed session 9.
Nov 29 07:21:08 compute-0 sshd-session[45071]: Invalid user pivpn from 103.236.140.19 port 50948
Nov 29 07:21:08 compute-0 sshd-session[45071]: Received disconnect from 103.236.140.19 port 50948:11: Bye Bye [preauth]
Nov 29 07:21:08 compute-0 sshd-session[45071]: Disconnected from invalid user pivpn 103.236.140.19 port 50948 [preauth]
Nov 29 07:21:14 compute-0 sshd-session[45102]: Accepted publickey for zuul from 192.168.122.30 port 55272 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:21:14 compute-0 systemd-logind[782]: New session 10 of user zuul.
Nov 29 07:21:14 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 29 07:21:14 compute-0 sshd-session[45102]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:21:15 compute-0 python3.9[45255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:16 compute-0 sudo[45409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeuouwdkbmvftqgdijylyscwljydnpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400876.2400498-36-232273769857254/AnsiballZ_getent.py'
Nov 29 07:21:16 compute-0 sudo[45409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:16 compute-0 python3.9[45411]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:21:16 compute-0 sudo[45409]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-0 sudo[45562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkenqzbbjaqjsfqmikeuvfxbijkcqhor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400877.0899038-44-16725076752346/AnsiballZ_group.py'
Nov 29 07:21:17 compute-0 sudo[45562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:17 compute-0 python3.9[45564]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:21:18 compute-0 groupadd[45565]: group added to /etc/group: name=openvswitch, GID=42476
Nov 29 07:21:18 compute-0 groupadd[45565]: group added to /etc/gshadow: name=openvswitch
Nov 29 07:21:18 compute-0 groupadd[45565]: new group: name=openvswitch, GID=42476
Nov 29 07:21:18 compute-0 sudo[45562]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:18 compute-0 sudo[45720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxfscachlmfbphmolruaehebxannzbxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400878.2616777-52-56802521937430/AnsiballZ_user.py'
Nov 29 07:21:18 compute-0 sudo[45720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:19 compute-0 python3.9[45722]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:21:19 compute-0 useradd[45724]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:21:19 compute-0 useradd[45724]: add 'openvswitch' to group 'hugetlbfs'
Nov 29 07:21:19 compute-0 useradd[45724]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 29 07:21:19 compute-0 sudo[45720]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:20 compute-0 sudo[45880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuiudjngopoqgllsskdhuwefdqwhrfez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400879.6829152-62-91551447900381/AnsiballZ_setup.py'
Nov 29 07:21:20 compute-0 sudo[45880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:20 compute-0 python3.9[45882]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:21:20 compute-0 sshd-session[45883]: Received disconnect from 20.185.243.158 port 50360:11: Bye Bye [preauth]
Nov 29 07:21:20 compute-0 sshd-session[45883]: Disconnected from authenticating user root 20.185.243.158 port 50360 [preauth]
Nov 29 07:21:20 compute-0 sudo[45880]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:21 compute-0 sudo[45966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqizzwbggwzpapejlnvntwtaompbglqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400879.6829152-62-91551447900381/AnsiballZ_dnf.py'
Nov 29 07:21:21 compute-0 sudo[45966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:21 compute-0 python3.9[45968]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:21:24 compute-0 sudo[45966]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:25 compute-0 sudo[46132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghnqpfbvzhytsgvrzbclgirepdphbyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400884.9033148-76-234579736605238/AnsiballZ_dnf.py'
Nov 29 07:21:25 compute-0 sudo[46132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:25 compute-0 python3.9[46134]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:21:38 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:21:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:21:38 compute-0 groupadd[46157]: group added to /etc/group: name=unbound, GID=993
Nov 29 07:21:38 compute-0 groupadd[46157]: group added to /etc/gshadow: name=unbound
Nov 29 07:21:38 compute-0 groupadd[46157]: new group: name=unbound, GID=993
Nov 29 07:21:39 compute-0 useradd[46164]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 29 07:21:39 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 07:21:39 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 07:21:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:21:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:21:41 compute-0 systemd[1]: Reloading.
Nov 29 07:21:41 compute-0 systemd-sysv-generator[46666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:21:41 compute-0 systemd-rc-local-generator[46662]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:21:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:21:42 compute-0 sudo[46132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:21:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:21:42 compute-0 systemd[1]: run-r35d595f820214f1e8ea1a229882b9b53.service: Deactivated successfully.
Nov 29 07:21:42 compute-0 sudo[47230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmntujmzhhakwaanwrjytuexaqqmhkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400902.2734509-84-133874794718293/AnsiballZ_systemd.py'
Nov 29 07:21:42 compute-0 sudo[47230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:43 compute-0 python3.9[47232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:21:43 compute-0 systemd[1]: Reloading.
Nov 29 07:21:43 compute-0 systemd-rc-local-generator[47263]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:21:43 compute-0 systemd-sysv-generator[47267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:21:43 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 07:21:43 compute-0 chown[47275]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 07:21:43 compute-0 ovs-ctl[47280]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 07:21:43 compute-0 ovs-ctl[47280]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 07:21:43 compute-0 ovs-ctl[47280]: Starting ovsdb-server [  OK  ]
Nov 29 07:21:43 compute-0 ovs-vsctl[47329]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 07:21:43 compute-0 ovs-vsctl[47348]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"230c4529-a404-4083-a72e-940c7905cc88\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 07:21:43 compute-0 ovs-ctl[47280]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 07:21:43 compute-0 ovs-ctl[47280]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:21:43 compute-0 ovs-vsctl[47354]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 07:21:43 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 07:21:43 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 07:21:43 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 07:21:43 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 07:21:44 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 07:21:44 compute-0 ovs-ctl[47399]: Inserting openvswitch module [  OK  ]
Nov 29 07:21:44 compute-0 ovs-ctl[47368]: Starting ovs-vswitchd [  OK  ]
Nov 29 07:21:44 compute-0 ovs-vsctl[47419]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 07:21:44 compute-0 ovs-ctl[47368]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:21:44 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 07:21:44 compute-0 systemd[1]: Starting Open vSwitch...
Nov 29 07:21:44 compute-0 systemd[1]: Finished Open vSwitch.
Nov 29 07:21:44 compute-0 sudo[47230]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:45 compute-0 python3.9[47570]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:46 compute-0 sudo[47720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrrwhgwlklkfnmlcxjnpsziufqwuimz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400905.5558813-102-153538629218444/AnsiballZ_sefcontext.py'
Nov 29 07:21:46 compute-0 sudo[47720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:46 compute-0 python3.9[47722]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:21:47 compute-0 sshd-session[47723]: Invalid user sopuser from 114.34.106.146 port 42844
Nov 29 07:21:47 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 29 07:21:47 compute-0 sshd-session[47723]: Received disconnect from 114.34.106.146 port 42844:11: Bye Bye [preauth]
Nov 29 07:21:47 compute-0 sshd-session[47723]: Disconnected from invalid user sopuser 114.34.106.146 port 42844 [preauth]
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:21:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:21:47 compute-0 sudo[47720]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:48 compute-0 python3.9[47879]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:49 compute-0 sudo[48035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieqfrgmqoxpwynkwntgdvzhfcfraeytj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400909.0498047-120-179349616834726/AnsiballZ_dnf.py'
Nov 29 07:21:49 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 07:21:49 compute-0 sudo[48035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:49 compute-0 python3.9[48037]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:21:51 compute-0 sudo[48035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:51 compute-0 sudo[48188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylejpbgcehsjxippoznfdeztanzjyqzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400911.2892814-128-255564126940264/AnsiballZ_command.py'
Nov 29 07:21:51 compute-0 sudo[48188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:51 compute-0 python3.9[48190]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:52 compute-0 sudo[48188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:53 compute-0 sudo[48475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwpgnhfoxpvcguhkpjuucejkcbcmgbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400912.9254453-136-37534266606848/AnsiballZ_file.py'
Nov 29 07:21:53 compute-0 sudo[48475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:53 compute-0 python3.9[48477]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:21:53 compute-0 sudo[48475]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:54 compute-0 python3.9[48627]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:21:54 compute-0 sudo[48779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adfnftwhqcwuatsgigkzlakeezhupzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400914.6217859-152-239555361897920/AnsiballZ_dnf.py'
Nov 29 07:21:54 compute-0 sudo[48779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:55 compute-0 python3.9[48781]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:21:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:21:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:21:57 compute-0 systemd[1]: Reloading.
Nov 29 07:21:57 compute-0 systemd-sysv-generator[48824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:21:57 compute-0 systemd-rc-local-generator[48821]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:21:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:21:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:21:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:21:57 compute-0 systemd[1]: run-r126157f5f1ad4579a5442f92091c35a2.service: Deactivated successfully.
Nov 29 07:21:57 compute-0 sudo[48779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:58 compute-0 sudo[49096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wouposgnvjtffxxeobvbaetzxadystad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400917.785839-160-118712400936793/AnsiballZ_systemd.py'
Nov 29 07:21:58 compute-0 sudo[49096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:58 compute-0 python3.9[49098]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:21:58 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 07:21:58 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 07:21:58 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 07:21:58 compute-0 systemd[1]: Stopping Network Manager...
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4158] caught SIGTERM, shutting down normally.
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4177] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4177] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4178] dhcp4 (eth0): state changed no lease
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4180] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:21:58 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:21:58 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:21:58 compute-0 NetworkManager[7181]: <info>  [1764400918.4863] exiting (success)
Nov 29 07:21:58 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 07:21:58 compute-0 systemd[1]: Stopped Network Manager.
Nov 29 07:21:58 compute-0 systemd[1]: NetworkManager.service: Consumed 22.497s CPU time, 4.1M memory peak, read 0B from disk, written 35.0K to disk.
Nov 29 07:21:58 compute-0 systemd[1]: Starting Network Manager...
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.5758] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:558c485d-5d9e-4d57-8633-c8b29a02c676)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.5760] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.5835] manager[0x5560d56b5090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 07:21:58 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 07:21:58 compute-0 systemd[1]: Started Hostname Service.
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6807] hostname: hostname: using hostnamed
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6810] hostname: static hostname changed from (none) to "compute-0"
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6815] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6820] manager[0x5560d56b5090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6820] manager[0x5560d56b5090]: rfkill: WWAN hardware radio set enabled
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6846] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6856] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6857] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6857] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6858] manager: Networking is enabled by state file
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6861] settings: Loaded settings plugin: keyfile (internal)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6865] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6892] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6904] dhcp: init: Using DHCP client 'internal'
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6906] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6910] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6914] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6921] device (lo): Activation: starting connection 'lo' (aa60ea5c-c713-40ed-b887-294ab3e1a707)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6927] device (eth0): carrier: link connected
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6931] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6936] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6936] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6942] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6948] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6954] device (eth1): carrier: link connected
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6958] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6962] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef) (indicated)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6963] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6967] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6973] device (eth1): Activation: starting connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.6980] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 07:21:58 compute-0 systemd[1]: Started Network Manager.
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7000] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7003] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7005] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7008] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7011] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7014] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7017] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7021] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7028] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7032] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7045] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7056] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7065] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7066] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7070] device (lo): Activation: successful, device activated.
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7077] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7078] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7080] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 07:21:58 compute-0 NetworkManager[49116]: <info>  [1764400918.7083] device (eth1): Activation: successful, device activated.
Nov 29 07:21:58 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 29 07:21:58 compute-0 sudo[49096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:59 compute-0 sudo[49303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywmjaplezfinsfhexwzlrmbhrnwnlzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400918.9215386-168-105522799711635/AnsiballZ_dnf.py'
Nov 29 07:21:59 compute-0 sudo[49303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:59 compute-0 python3.9[49305]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2293] dhcp4 (eth0): state changed new lease, address=38.102.83.203
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2304] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2411] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2447] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2449] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2452] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2457] device (eth0): Activation: successful, device activated.
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2462] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 07:22:00 compute-0 NetworkManager[49116]: <info>  [1764400920.2464] manager: startup complete
Nov 29 07:22:00 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 29 07:22:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:22:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:22:05 compute-0 systemd[1]: Reloading.
Nov 29 07:22:05 compute-0 systemd-rc-local-generator[49375]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:22:05 compute-0 systemd-sysv-generator[49380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:22:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:22:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:22:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:22:07 compute-0 systemd[1]: run-rebe339fc28c74bc3aa1c846984ff3a7f.service: Deactivated successfully.
Nov 29 07:22:07 compute-0 sudo[49303]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:08 compute-0 sudo[49781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stspluetxtearnrcfjzixrnyenbvcxbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400928.2268543-180-209810109470507/AnsiballZ_stat.py'
Nov 29 07:22:08 compute-0 sudo[49781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:08 compute-0 python3.9[49783]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:22:08 compute-0 sudo[49781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-0 sudo[49933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eivzudyzecoijzemgippoyojkijuqvlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400928.9515133-189-163637204980759/AnsiballZ_ini_file.py'
Nov 29 07:22:09 compute-0 sudo[49933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:09 compute-0 python3.9[49935]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:09 compute-0 sudo[49933]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:10 compute-0 sudo[50087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pztcpzfukczbmudcjdlqnybpesanmmin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400929.9231677-199-127275455379584/AnsiballZ_ini_file.py'
Nov 29 07:22:10 compute-0 sudo[50087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:10 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:22:10 compute-0 python3.9[50089]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:10 compute-0 sudo[50087]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:10 compute-0 sudo[50239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaaiauwwjozjnhgskzwspzcavvwvzlxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400930.6417565-199-47630761726311/AnsiballZ_ini_file.py'
Nov 29 07:22:10 compute-0 sudo[50239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:11 compute-0 python3.9[50241]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:11 compute-0 sudo[50239]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:11 compute-0 sudo[50391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkzgovhhugrdhdwwtuyitfbsutgflyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400931.3772352-214-6757689205967/AnsiballZ_ini_file.py'
Nov 29 07:22:11 compute-0 sudo[50391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:11 compute-0 python3.9[50393]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:11 compute-0 sudo[50391]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:12 compute-0 sudo[50543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiqapbasnkkaffoazpvewoispbgrvnvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400932.186469-214-240003408018963/AnsiballZ_ini_file.py'
Nov 29 07:22:12 compute-0 sudo[50543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:12 compute-0 python3.9[50545]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:12 compute-0 sudo[50543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-0 sudo[50695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjvmbzwppqiwzkkwyqlsiculjopaikxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400932.8753765-229-41714385348052/AnsiballZ_stat.py'
Nov 29 07:22:13 compute-0 sudo[50695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:13 compute-0 python3.9[50697]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:13 compute-0 sudo[50695]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-0 sudo[50818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiqsynaeconfcxiaydcrvywmslxrbrbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400932.8753765-229-41714385348052/AnsiballZ_copy.py'
Nov 29 07:22:13 compute-0 sudo[50818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:14 compute-0 python3.9[50820]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400932.8753765-229-41714385348052/.source _original_basename=.b1yvew_4 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:14 compute-0 sudo[50818]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:14 compute-0 sudo[50970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfcvhbfertpwbdksahxzkhrurwqbhgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400934.2617068-244-88135068932758/AnsiballZ_file.py'
Nov 29 07:22:14 compute-0 sudo[50970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:14 compute-0 python3.9[50972]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:14 compute-0 sudo[50970]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:15 compute-0 sudo[51122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqapldhyuojncslyubxsaizzzjiskovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400934.9683034-252-123009969018627/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 29 07:22:15 compute-0 sudo[51122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:15 compute-0 python3.9[51124]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 07:22:15 compute-0 sudo[51122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:16 compute-0 sudo[51274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpjpwofwukbtecrotfivigirwxqihbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400935.9017107-261-96810785196454/AnsiballZ_file.py'
Nov 29 07:22:16 compute-0 sudo[51274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:16 compute-0 python3.9[51276]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:16 compute-0 sudo[51274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-0 sudo[51426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqadsmlgimzailknwiasktjtoyzvsgpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400936.6615484-271-3050680561305/AnsiballZ_stat.py'
Nov 29 07:22:17 compute-0 sudo[51426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:17 compute-0 sudo[51426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-0 sudo[51549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpubnlzpxbwrbnqckishmpwhwwfjzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400936.6615484-271-3050680561305/AnsiballZ_copy.py'
Nov 29 07:22:17 compute-0 sudo[51549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:17 compute-0 sudo[51549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-0 sudo[51701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwugqsummocxvebxsuyszvayfldrcmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400938.1024983-286-233154600112772/AnsiballZ_slurp.py'
Nov 29 07:22:18 compute-0 sudo[51701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:18 compute-0 python3.9[51703]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 07:22:18 compute-0 sudo[51701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:19 compute-0 sudo[51878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagreykzgquodyioggajwfgxmprcqdqn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.032202-295-34087713267083/async_wrapper.py j344767920485 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.032202-295-34087713267083/AnsiballZ_edpm_os_net_config.py _'
Nov 29 07:22:19 compute-0 sudo[51878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:19 compute-0 ansible-async_wrapper.py[51880]: Invoked with j344767920485 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.032202-295-34087713267083/AnsiballZ_edpm_os_net_config.py _
Nov 29 07:22:19 compute-0 ansible-async_wrapper.py[51883]: Starting module and watcher
Nov 29 07:22:19 compute-0 ansible-async_wrapper.py[51883]: Start watching 51884 (300)
Nov 29 07:22:19 compute-0 ansible-async_wrapper.py[51884]: Start module (51884)
Nov 29 07:22:19 compute-0 ansible-async_wrapper.py[51880]: Return async_wrapper task started.
Nov 29 07:22:19 compute-0 sudo[51878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:20 compute-0 python3.9[51885]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
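
The config file driving edpm_os_net_config is not reproduced in this log, but the devices NetworkManager creates below (an OVS bridge br-ex carrying eth1 plus vlan20-23) suggest a layout of roughly this shape. A sketch rendered with PyYAML; the member NIC placement and the address are assumptions, not logged values:

    import yaml  # PyYAML

    # Hypothetical os-net-config layout consistent with the br-ex/vlan20-23
    # activity below; vlan21-23 would follow the vlan20 pattern, and the
    # address is a placeholder, not a logged value.
    config = {
        "network_config": [{
            "type": "ovs_bridge",
            "name": "br-ex",
            "members": [
                {"type": "interface", "name": "eth1"},
                {"type": "vlan", "vlan_id": 20,
                 "addresses": [{"ip_netmask": "172.17.0.10/24"}]},
            ],
        }]
    }
    print(yaml.safe_dump(config, sort_keys=False))
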
Nov 29 07:22:20 compute-0 sshd-session[51780]: Invalid user fan from 103.234.151.178 port 29882
Nov 29 07:22:20 compute-0 sshd-session[51780]: Received disconnect from 103.234.151.178 port 29882:11: Bye Bye [preauth]
Nov 29 07:22:20 compute-0 sshd-session[51780]: Disconnected from invalid user fan 103.234.151.178 port 29882 [preauth]
Nov 29 07:22:20 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 07:22:20 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 07:22:20 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 07:22:20 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 07:22:20 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1130] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1144] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1577] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1579] audit: op="connection-add" uuid="80db5803-cdaf-4ee3-8f3d-68f76bc0dfa9" name="br-ex-br" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1593] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1594] audit: op="connection-add" uuid="7aef5ec4-930a-42d6-80aa-ce8e887e8447" name="br-ex-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1604] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1606] audit: op="connection-add" uuid="ae2e9520-f6f5-4e80-910c-a3f6133c5616" name="eth1-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1616] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1617] audit: op="connection-add" uuid="63cbabb5-b23e-415a-85e6-38f8bcf45a61" name="vlan20-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1628] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1630] audit: op="connection-add" uuid="6a5fc1df-158c-4830-b134-bf23017681ae" name="vlan21-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1640] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1641] audit: op="connection-add" uuid="d5bd10ad-5ca2-4218-a2f9-808c60b111bd" name="vlan22-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1651] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1653] audit: op="connection-add" uuid="434b63bc-36f4-4bb2-b01c-e68170823ffe" name="vlan23-port" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1670] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,connection.autoconnect-priority,connection.timestamp" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1685] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.1687] audit: op="connection-add" uuid="ec59792c-f3e1-47e0-b92a-142a26cabced" name="br-ex-if" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2324] audit: op="connection-update" uuid="5b89f824-e5ce-5988-887a-f591240f43ef" name="ci-private-network" args="ovs-interface.type,ipv4.routes,ipv4.never-default,ipv4.addresses,ipv4.method,ipv4.dns,ipv4.routing-rules,ipv6.addr-gen-mode,ipv6.routes,ipv6.addresses,ipv6.method,ipv6.dns,ipv6.routing-rules,connection.port-type,connection.controller,connection.master,connection.slave-type,connection.timestamp,ovs-external-ids.data" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2347] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2348] audit: op="connection-add" uuid="df5b8757-740e-4d31-9419-322eb3c175c6" name="vlan20-if" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2380] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2382] audit: op="connection-add" uuid="59a7b1ea-4fb6-499c-b3ff-d4a1a581e332" name="vlan21-if" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2407] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2412] audit: op="connection-add" uuid="f4beaa78-5cfd-4856-ae60-88763fc987a3" name="vlan22-if" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2433] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2435] audit: op="connection-add" uuid="0c0d174c-01d4-4c7e-a9fe-29308a94b216" name="vlan23-if" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2452] audit: op="connection-delete" uuid="2bd76030-c321-3792-974f-8b7377201814" name="Wired connection 1" pid=51886 uid=0 result="success"
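
Everything from checkpoint-create down to the connection-delete above is os-net-config (via nmstate, per use_nmstate=True) building the OVS topology through NetworkManager. For reference, the br-ex chain could be created by hand with nmcli in the style of nm-openvswitch(7); the connection and interface names below are the ones from the log, the invocation itself is a sketch:

    import subprocess

    # nmcli equivalents of the br-ex bridge/port/interface connections
    # added above (br-ex-br, br-ex-port, br-ex-if, eth1-port).
    for cmd in [
        ["nmcli", "conn", "add", "type", "ovs-bridge",
         "con-name", "br-ex-br", "conn.interface", "br-ex"],
        ["nmcli", "conn", "add", "type", "ovs-port",
         "con-name", "br-ex-port", "conn.interface", "br-ex",
         "master", "br-ex-br"],
        ["nmcli", "conn", "add", "type", "ovs-interface",
         "con-name", "br-ex-if", "slave-type", "ovs-port",
         "conn.interface", "br-ex", "master", "br-ex-port"],
        ["nmcli", "conn", "add", "type", "ovs-port",
         "con-name", "eth1-port", "conn.interface", "eth1",
         "master", "br-ex-br"],
    ]:
        subprocess.run(cmd, check=True)

The vlan20-23 ports would repeat the ovs-port/ovs-interface pair, with ovs-port.tag carrying the VLAN id.
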
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2468] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2479] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2484] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (80db5803-cdaf-4ee3-8f3d-68f76bc0dfa9)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2485] audit: op="connection-activate" uuid="80db5803-cdaf-4ee3-8f3d-68f76bc0dfa9" name="br-ex-br" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2487] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2495] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2500] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (7aef5ec4-930a-42d6-80aa-ce8e887e8447)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2502] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2509] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2513] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ae2e9520-f6f5-4e80-910c-a3f6133c5616)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2516] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2523] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2528] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (63cbabb5-b23e-415a-85e6-38f8bcf45a61)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2530] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2538] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2543] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6a5fc1df-158c-4830-b134-bf23017681ae)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2546] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2553] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2557] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d5bd10ad-5ca2-4218-a2f9-808c60b111bd)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2559] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2568] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2572] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (434b63bc-36f4-4bb2-b01c-e68170823ffe)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2573] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2576] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2579] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2585] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2590] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2595] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ec59792c-f3e1-47e0-b92a-142a26cabced)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2596] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2600] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2603] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2604] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2606] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2617] device (eth1): disconnecting for new activation request.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2618] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2621] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2623] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2624] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2627] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2633] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2638] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (df5b8757-740e-4d31-9419-322eb3c175c6)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2638] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2641] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2643] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2644] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2647] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2652] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2657] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (59a7b1ea-4fb6-499c-b3ff-d4a1a581e332)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2658] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2661] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2663] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2665] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2668] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2674] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2679] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (f4beaa78-5cfd-4856-ae60-88763fc987a3)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2680] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2684] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2686] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2687] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2691] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2696] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2700] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0c0d174c-01d4-4c7e-a9fe-29308a94b216)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2700] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2703] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2705] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2706] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2708] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2722] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2725] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2728] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2730] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2737] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2740] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2744] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2747] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2749] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2754] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2757] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2761] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2763] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2769] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2773] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2776] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2778] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2783] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2788] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 systemd-udevd[51891]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:22:22 compute-0 kernel: Timeout policy base is empty
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2791] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2793] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2797] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2801] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2802] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2802] dhcp4 (eth0): state changed no lease
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2804] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2814] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2817] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51886 uid=0 result="fail" reason="Device is not activated"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2820] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.2826] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 07:22:22 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:22:22 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4152] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4163] dhcp4 (eth0): state changed new lease, address=38.102.83.203
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4176] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 07:22:22 compute-0 kernel: br-ex: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4253] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 07:22:22 compute-0 kernel: vlan22: entered promiscuous mode
Nov 29 07:22:22 compute-0 systemd-udevd[51892]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4394] device (eth1): Activation: starting connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4399] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4401] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4404] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4406] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4408] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4409] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4412] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4415] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 07:22:22 compute-0 kernel: vlan23: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4432] device (eth1): disconnecting for new activation request.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4433] audit: op="connection-activate" uuid="5b89f824-e5ce-5988-887a-f591240f43ef" name="ci-private-network" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4449] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4453] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4458] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4464] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4468] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4472] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4478] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4483] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4489] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4495] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4500] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4505] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4510] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4515] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4520] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 kernel: vlan20: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4561] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4562] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51886 uid=0 result="success"
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4562] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4570] device (eth1): Activation: starting connection 'ci-private-network' (5b89f824-e5ce-5988-887a-f591240f43ef)
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4584] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4588] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4596] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4605] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4612] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4624] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4626] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 kernel: vlan21: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4675] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4691] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4698] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4699] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4703] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4710] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4726] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4732] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4746] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.4758] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5109] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5110] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5111] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5112] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5127] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5132] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5137] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5143] device (eth1): Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5148] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5153] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5174] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5416] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5418] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:22:22 compute-0 NetworkManager[49116]: <info>  [1764400942.5426] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:22:23 compute-0 sshd-session[51949]: Invalid user monitoring from 103.236.140.19 port 60242
Nov 29 07:22:23 compute-0 sudo[52248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rndhwhptyhumaycrzocazmtibjiuovrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.0357127-295-241588198607748/AnsiballZ_async_status.py'
Nov 29 07:22:23 compute-0 sudo[52248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:23 compute-0 sshd-session[51949]: Received disconnect from 103.236.140.19 port 60242:11: Bye Bye [preauth]
Nov 29 07:22:23 compute-0 sshd-session[51949]: Disconnected from invalid user monitoring 103.236.140.19 port 60242 [preauth]
Nov 29 07:22:23 compute-0 python3.9[52250]: ansible-ansible.legacy.async_status Invoked with jid=j344767920485.51880 mode=status _async_dir=/root/.ansible_async
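
This async_status call (and the two that follow) polls the job file that async_wrapper wrote under /root/.ansible_async. A sketch of that polling loop; the jid and directory are from the log, and the assumption is that the file carries a truthy "finished" field once the module has exited, which is what async_status reports back:

    import json
    import time

    # Job file written by async_wrapper; jid taken from the log above.
    path = "/root/.ansible_async/j344767920485.51880"

    while True:
        try:
            with open(path) as f:
                result = json.load(f)
        except (FileNotFoundError, json.JSONDecodeError):
            result = {}  # wrapper may still be writing the file
        if result.get("finished"):
            print("rc:", result.get("rc"))
            break
        time.sleep(4)  # the play above re-checks on a similar cadence
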
Nov 29 07:22:23 compute-0 NetworkManager[49116]: <info>  [1764400943.7056] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51886 uid=0 result="success"
Nov 29 07:22:23 compute-0 sudo[52248]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:23 compute-0 NetworkManager[49116]: <info>  [1764400943.8530] checkpoint[0x5560d568b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 07:22:23 compute-0 NetworkManager[49116]: <info>  [1764400943.8532] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.1243] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.1254] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.2970] audit: op="networking-control" arg="global-dns-configuration" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.2997] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.3023] audit: op="networking-control" arg="global-dns-configuration" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.3045] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51886 uid=0 result="success"
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.4350] checkpoint[0x5560d568ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 07:22:24 compute-0 NetworkManager[49116]: <info>  [1764400944.4354] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51886 uid=0 result="success"
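
Checkpoints 1 and 2 bracket the two os-net-config phases: create a checkpoint, apply changes, extend the rollback timeout while work is in progress, then destroy the checkpoint to commit (a rollback would have shown up here instead on failure). The underlying D-Bus calls can be issued directly; a sketch via busctl, where the empty device array means "all devices" and the flags value 0 (no special flags) is an assumption:

    import subprocess

    NM = "org.freedesktop.NetworkManager"

    def nm_call(method, signature, *args):
        # Thin wrapper over `busctl call` against NetworkManager's manager object.
        out = subprocess.run(
            ["busctl", "call", NM, "/org/freedesktop/NetworkManager", NM,
             method, signature, *map(str, args)],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> o
    reply = nm_call("CheckpointCreate", "aouu", 0, 300, 0)
    checkpoint = reply.split()[-1].strip('"')
    nm_call("CheckpointAdjustRollbackTimeout", "ou", checkpoint, 300)
    nm_call("CheckpointDestroy", "o", checkpoint)  # commit: give up the rollback
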
Nov 29 07:22:24 compute-0 ansible-async_wrapper.py[51884]: Module complete (51884)
Nov 29 07:22:24 compute-0 ansible-async_wrapper.py[51883]: Done in kid B.
Nov 29 07:22:27 compute-0 sudo[52354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xspkjcyfwiomcelxdugspvohcypitydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.0357127-295-241588198607748/AnsiballZ_async_status.py'
Nov 29 07:22:27 compute-0 sudo[52354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:27 compute-0 python3.9[52356]: ansible-ansible.legacy.async_status Invoked with jid=j344767920485.51880 mode=status _async_dir=/root/.ansible_async
Nov 29 07:22:27 compute-0 sudo[52354]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:27 compute-0 sudo[52454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otxzumldsulfcqhbxwbcmrycufycbfkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.0357127-295-241588198607748/AnsiballZ_async_status.py'
Nov 29 07:22:27 compute-0 sudo[52454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:27 compute-0 python3.9[52456]: ansible-ansible.legacy.async_status Invoked with jid=j344767920485.51880 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:22:27 compute-0 sudo[52454]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:28 compute-0 sudo[52606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doipwbnglfxexpehjljnlkbxysefjcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400948.110323-322-31576839366659/AnsiballZ_stat.py'
Nov 29 07:22:28 compute-0 sudo[52606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:28 compute-0 python3.9[52608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:28 compute-0 sudo[52606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 07:22:29 compute-0 sudo[52731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vetbzvgmklpnnvzrnijjmcpapazqvufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400948.110323-322-31576839366659/AnsiballZ_copy.py'
Nov 29 07:22:29 compute-0 sudo[52731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:29 compute-0 python3.9[52733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400948.110323-322-31576839366659/.source.returncode _original_basename=.lfojlfsp follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:29 compute-0 sudo[52731]: pam_unix(sudo:session): session closed for user root
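
A detail worth noting in the copy above: checksum b6589fc6ab0dc82cf12099d1c2d40ab994e8410c is the SHA-1 of the single character "0", so os-net-config.returncode records a clean exit. Quick check:

    import hashlib

    # Matches the checksum logged for os-net-config.returncode above,
    # confirming the file body is just "0" (success).
    print(hashlib.sha1(b"0").hexdigest())
    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c
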
Nov 29 07:22:30 compute-0 sudo[52883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagflgwjxagkksjiuzklxvqzzuitkzkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400949.6028285-338-206859384330731/AnsiballZ_stat.py'
Nov 29 07:22:30 compute-0 sudo[52883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:30 compute-0 python3.9[52885]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:30 compute-0 sudo[52883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:30 compute-0 sudo[53007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmwzeklekxfddxmvpcmgmhpzixumdpwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400949.6028285-338-206859384330731/AnsiballZ_copy.py'
Nov 29 07:22:30 compute-0 sudo[53007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:30 compute-0 python3.9[53009]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400949.6028285-338-206859384330731/.source.cfg _original_basename=.ui1ylx5n follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:31 compute-0 sudo[53007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:31 compute-0 sudo[53159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xllosapkqhurygdcrnraprrjyvienetm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400951.2568893-353-150900121784682/AnsiballZ_systemd.py'
Nov 29 07:22:31 compute-0 sudo[53159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:31 compute-0 python3.9[53161]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:22:31 compute-0 systemd[1]: Reloading Network Manager...
Nov 29 07:22:32 compute-0 NetworkManager[49116]: <info>  [1764400952.0124] audit: op="reload" arg="0" pid=53165 uid=0 result="success"
Nov 29 07:22:32 compute-0 NetworkManager[49116]: <info>  [1764400952.0135] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 07:22:32 compute-0 systemd[1]: Reloaded Network Manager.
Nov 29 07:22:32 compute-0 sudo[53159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:32 compute-0 sshd-session[45105]: Connection closed by 192.168.122.30 port 55272
Nov 29 07:22:32 compute-0 sshd-session[45102]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:32 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 07:22:32 compute-0 systemd[1]: session-10.scope: Consumed 53.182s CPU time.
Nov 29 07:22:32 compute-0 systemd-logind[782]: Session 10 logged out. Waiting for processes to exit.
Nov 29 07:22:32 compute-0 systemd-logind[782]: Removed session 10.
Nov 29 07:22:32 compute-0 sshd-session[53194]: Invalid user admin1 from 20.185.243.158 port 33336
Nov 29 07:22:32 compute-0 sshd-session[53194]: Received disconnect from 20.185.243.158 port 33336:11: Bye Bye [preauth]
Nov 29 07:22:32 compute-0 sshd-session[53194]: Disconnected from invalid user admin1 20.185.243.158 port 33336 [preauth]
Nov 29 07:22:37 compute-0 sshd-session[53198]: Accepted publickey for zuul from 192.168.122.30 port 57462 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:22:37 compute-0 systemd-logind[782]: New session 11 of user zuul.
Nov 29 07:22:37 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 29 07:22:37 compute-0 sshd-session[53198]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:22:38 compute-0 python3.9[53351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:39 compute-0 python3.9[53505]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:22:41 compute-0 python3.9[53701]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:41 compute-0 sshd-session[53201]: Connection closed by 192.168.122.30 port 57462
Nov 29 07:22:41 compute-0 sshd-session[53198]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:41 compute-0 systemd-logind[782]: Session 11 logged out. Waiting for processes to exit.
Nov 29 07:22:41 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 07:22:41 compute-0 systemd[1]: session-11.scope: Consumed 2.553s CPU time.
Nov 29 07:22:41 compute-0 systemd-logind[782]: Removed session 11.
Nov 29 07:22:42 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:22:46 compute-0 sshd-session[53574]: Invalid user onkar from 101.47.142.104 port 40758
Nov 29 07:22:46 compute-0 sshd-session[53574]: Received disconnect from 101.47.142.104 port 40758:11: Bye Bye [preauth]
Nov 29 07:22:46 compute-0 sshd-session[53574]: Disconnected from invalid user onkar 101.47.142.104 port 40758 [preauth]
Nov 29 07:22:46 compute-0 sshd-session[53730]: Accepted publickey for zuul from 192.168.122.30 port 37806 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:22:46 compute-0 systemd-logind[782]: New session 12 of user zuul.
Nov 29 07:22:46 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 29 07:22:46 compute-0 sshd-session[53730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:22:47 compute-0 python3.9[53883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:48 compute-0 python3.9[54037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:49 compute-0 sudo[54192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntihemugrydnekjzrtqlciiknmlitmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400969.4188404-40-105845608957577/AnsiballZ_setup.py'
Nov 29 07:22:49 compute-0 sudo[54192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:50 compute-0 python3.9[54194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:22:50 compute-0 sudo[54192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:50 compute-0 sudo[54276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skoikwhhyrybkpgfblctmdzgjqywvazn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400969.4188404-40-105845608957577/AnsiballZ_dnf.py'
Nov 29 07:22:50 compute-0 sudo[54276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:51 compute-0 python3.9[54278]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:22:52 compute-0 sudo[54276]: pam_unix(sudo:session): session closed for user root
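
The dnf task above is the standard "ensure podman is present" step; with state=present it is a no-op when the package is already installed. Outside Ansible the equivalent is one idempotent transaction (sketch, run as root):

    import subprocess

    # Same effect as ansible.builtin.dnf with name=podman state=present.
    subprocess.run(["dnf", "-y", "install", "podman"], check=True)
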
Nov 29 07:22:53 compute-0 sudo[54430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inubkcxhppctlijxkhvvzvppeqjqsaqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400973.0017524-52-262022974231176/AnsiballZ_setup.py'
Nov 29 07:22:53 compute-0 sudo[54430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:53 compute-0 python3.9[54432]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:22:54 compute-0 sudo[54430]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:54 compute-0 sudo[54625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjeuqhknxucgwouwqiajaziepihzjslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400974.284183-63-39397157814315/AnsiballZ_file.py'
Nov 29 07:22:54 compute-0 sudo[54625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:54 compute-0 python3.9[54627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:55 compute-0 sudo[54625]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:55 compute-0 sudo[54777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunqbqiprtlmfsnqojcvrennphzjhdcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400975.2061446-71-163101713191323/AnsiballZ_command.py'
Nov 29 07:22:55 compute-0 sudo[54777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:55 compute-0 python3.9[54779]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:56 compute-0 podman[54780]: 2025-11-29 07:22:56.511365438 +0000 UTC m=+0.585124858 system refresh
Nov 29 07:22:56 compute-0 sudo[54777]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:57 compute-0 sudo[54940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihidzvnhsjwjfkjcdahyrlnqwmetweih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400976.7996206-79-64964960824856/AnsiballZ_stat.py'
Nov 29 07:22:57 compute-0 sudo[54940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:22:57 compute-0 python3.9[54942]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:57 compute-0 sudo[54940]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:58 compute-0 sudo[55063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzkbckqojcnqaoekvhxajhtuvmhmryk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400976.7996206-79-64964960824856/AnsiballZ_copy.py'
Nov 29 07:22:58 compute-0 sudo[55063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:58 compute-0 python3.9[55065]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400976.7996206-79-64964960824856/.source.json follow=False _original_basename=podman_network_config.j2 checksum=8761dcd0d26b1f894407de0e346ddbd6191dfda9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:58 compute-0 sudo[55063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:58 compute-0 sudo[55217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvhttlpofllzwlsdbwajpjrgiycdhtpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400978.4543083-94-15095405903290/AnsiballZ_stat.py'
Nov 29 07:22:58 compute-0 sudo[55217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:59 compute-0 python3.9[55219]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:59 compute-0 sudo[55217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:59 compute-0 sshd-session[55066]: Received disconnect from 114.34.106.146 port 51786:11: Bye Bye [preauth]
Nov 29 07:22:59 compute-0 sshd-session[55066]: Disconnected from authenticating user root 114.34.106.146 port 51786 [preauth]
Nov 29 07:22:59 compute-0 sudo[55340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunvrvnbzosqulfanuzqmllpqavrouju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400978.4543083-94-15095405903290/AnsiballZ_copy.py'
Nov 29 07:22:59 compute-0 sudo[55340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:59 compute-0 python3.9[55342]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400978.4543083-94-15095405903290/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:59 compute-0 sudo[55340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:00 compute-0 sudo[55492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuwknkkrancwwdwemgklqqqpwpgrddoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400980.0346189-110-189834660836696/AnsiballZ_ini_file.py'
Nov 29 07:23:00 compute-0 sudo[55492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:00 compute-0 python3.9[55494]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:00 compute-0 sudo[55492]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:01 compute-0 sudo[55644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcntnkpetqmycmrjsxjpfadocudrputn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400980.9565816-110-69132851209863/AnsiballZ_ini_file.py'
Nov 29 07:23:01 compute-0 sudo[55644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:01 compute-0 python3.9[55646]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:01 compute-0 sudo[55644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:01 compute-0 sudo[55796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmseenkjaxygqqlzmrpyahbcjvskvezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400981.6876512-110-221252377510544/AnsiballZ_ini_file.py'
Nov 29 07:23:01 compute-0 sudo[55796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:02 compute-0 python3.9[55798]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:02 compute-0 sudo[55796]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:02 compute-0 sudo[55948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boziowzfaabcqopkwunutzvltlepfwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400982.4134786-110-226660055183399/AnsiballZ_ini_file.py'
Nov 29 07:23:02 compute-0 sudo[55948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:02 compute-0 python3.9[55950]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:02 compute-0 sudo[55948]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:03 compute-0 sudo[56100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcfaefdorxlfhzyignlyxjswfrmdctsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400983.2551455-141-119595112912073/AnsiballZ_dnf.py'
Nov 29 07:23:03 compute-0 sudo[56100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:03 compute-0 python3.9[56102]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:23:05 compute-0 sudo[56100]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:06 compute-0 sudo[56253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdmzvfzbxbqymilpigabclybgrudlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400986.2187448-152-84634200679081/AnsiballZ_setup.py'
Nov 29 07:23:06 compute-0 sudo[56253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:06 compute-0 python3.9[56255]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:23:06 compute-0 sudo[56253]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:07 compute-0 sudo[56407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kznuemsfednipavynssrgvblckeoydbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400987.1404912-160-59245848835228/AnsiballZ_stat.py'
Nov 29 07:23:07 compute-0 sudo[56407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:07 compute-0 python3.9[56409]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:07 compute-0 sudo[56407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:08 compute-0 sudo[56559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqvtrogkbmogbnbtaalqxpfxaisbpkxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400987.963888-169-269955635866377/AnsiballZ_stat.py'
Nov 29 07:23:08 compute-0 sudo[56559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:08 compute-0 python3.9[56561]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:08 compute-0 sudo[56559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:09 compute-0 sudo[56711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjkmmvvchepzhpozsclvutcaqaiklnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400988.928525-179-28742032326945/AnsiballZ_command.py'
Nov 29 07:23:09 compute-0 sudo[56711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:09 compute-0 python3.9[56713]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:09 compute-0 sudo[56711]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:10 compute-0 sudo[56864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdqmnhnpzylpqelpomjlkhhrxasbwqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400989.8193314-189-257486534291736/AnsiballZ_service_facts.py'
Nov 29 07:23:10 compute-0 sudo[56864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:10 compute-0 python3.9[56866]: ansible-service_facts Invoked
Nov 29 07:23:10 compute-0 network[56883]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:23:10 compute-0 network[56884]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:23:10 compute-0 network[56885]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:23:13 compute-0 sudo[56864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[57168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisnutosqyiruslexehaudunrcgpgimc ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764400994.4943419-204-248751404834631/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764400994.4943419-204-248751404834631/args'
Nov 29 07:23:14 compute-0 sudo[57168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:15 compute-0 sudo[57168]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 sudo[57335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgibbdasnyszmegxzspjrogkddgcfutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400995.2902925-215-131919576359248/AnsiballZ_dnf.py'
Nov 29 07:23:15 compute-0 sudo[57335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:15 compute-0 python3.9[57337]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:23:17 compute-0 sudo[57335]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:18 compute-0 sudo[57488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioodyavpxwoxnvwdkilgtpkdcaxxbszl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400997.706479-228-186122524371897/AnsiballZ_package_facts.py'
Nov 29 07:23:18 compute-0 sudo[57488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:18 compute-0 python3.9[57490]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:23:18 compute-0 sudo[57488]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 sudo[57640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npktiqakkqkldaiopgweqrlewswzzksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400999.421286-238-99191329849658/AnsiballZ_stat.py'
Nov 29 07:23:19 compute-0 sudo[57640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:20 compute-0 python3.9[57642]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:20 compute-0 sudo[57640]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:20 compute-0 sudo[57765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfakihkxmwdophrzitgpcruwhlsrfhsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400999.421286-238-99191329849658/AnsiballZ_copy.py'
Nov 29 07:23:20 compute-0 sudo[57765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:20 compute-0 python3.9[57767]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400999.421286-238-99191329849658/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:20 compute-0 sudo[57765]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 sudo[57919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrrmybizudijjmhhkzlukulpmnlvuwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401000.8965554-253-1920090657282/AnsiballZ_stat.py'
Nov 29 07:23:21 compute-0 sudo[57919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:21 compute-0 python3.9[57921]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:21 compute-0 sudo[57919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 sudo[58044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljieydgkvxclktqtnldkyttfshiozlqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401000.8965554-253-1920090657282/AnsiballZ_copy.py'
Nov 29 07:23:21 compute-0 sudo[58044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:22 compute-0 python3.9[58046]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401000.8965554-253-1920090657282/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:22 compute-0 sudo[58044]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-0 sudo[58198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrdjthtkacnvpandgzclboeqfdcdjjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401002.6592698-274-3687195696676/AnsiballZ_lineinfile.py'
Nov 29 07:23:23 compute-0 sudo[58198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:23 compute-0 python3.9[58200]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:23 compute-0 sudo[58198]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:24 compute-0 sudo[58352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxthimieubmvsggfmbndwpqkptgncfdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401003.9983094-289-83512710610176/AnsiballZ_setup.py'
Nov 29 07:23:24 compute-0 sudo[58352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:24 compute-0 python3.9[58354]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:23:24 compute-0 sudo[58352]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:25 compute-0 sudo[58436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnobldxztrngmmsgrtejqkavypodesce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401003.9983094-289-83512710610176/AnsiballZ_systemd.py'
Nov 29 07:23:25 compute-0 sudo[58436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:25 compute-0 python3.9[58438]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:25 compute-0 sudo[58436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:26 compute-0 sudo[58590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtwcaxcibbodssvwzeheooddwsxjwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.3090162-305-280307749495469/AnsiballZ_setup.py'
Nov 29 07:23:26 compute-0 sudo[58590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:26 compute-0 python3.9[58592]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:23:27 compute-0 sudo[58590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:27 compute-0 sudo[58674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlueelldwmcxabrfvanzpoxjoscrwsez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.3090162-305-280307749495469/AnsiballZ_systemd.py'
Nov 29 07:23:27 compute-0 sudo[58674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:27 compute-0 python3.9[58676]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:23:27 compute-0 chronyd[792]: chronyd exiting
Nov 29 07:23:27 compute-0 systemd[1]: Stopping NTP client/server...
Nov 29 07:23:27 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 07:23:27 compute-0 systemd[1]: Stopped NTP client/server.
Nov 29 07:23:27 compute-0 systemd[1]: Starting NTP client/server...
Nov 29 07:23:27 compute-0 chronyd[58684]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 07:23:27 compute-0 chronyd[58684]: Frequency -26.159 +/- 0.160 ppm read from /var/lib/chrony/drift
Nov 29 07:23:27 compute-0 chronyd[58684]: Loaded seccomp filter (level 2)
Nov 29 07:23:27 compute-0 systemd[1]: Started NTP client/server.
Nov 29 07:23:27 compute-0 sudo[58674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:28 compute-0 sshd-session[53733]: Connection closed by 192.168.122.30 port 37806
Nov 29 07:23:28 compute-0 sshd-session[53730]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:23:28 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 07:23:28 compute-0 systemd[1]: session-12.scope: Consumed 29.637s CPU time.
Nov 29 07:23:28 compute-0 systemd-logind[782]: Session 12 logged out. Waiting for processes to exit.
Nov 29 07:23:28 compute-0 systemd-logind[782]: Removed session 12.
Nov 29 07:23:31 compute-0 sshd-session[58710]: Invalid user odoo from 103.234.151.178 port 53706
Nov 29 07:23:31 compute-0 sshd-session[58710]: Received disconnect from 103.234.151.178 port 53706:11: Bye Bye [preauth]
Nov 29 07:23:31 compute-0 sshd-session[58710]: Disconnected from invalid user odoo 103.234.151.178 port 53706 [preauth]
Nov 29 07:23:33 compute-0 sshd-session[58712]: Accepted publickey for zuul from 192.168.122.30 port 55250 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:23:33 compute-0 systemd-logind[782]: New session 13 of user zuul.
Nov 29 07:23:33 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 29 07:23:33 compute-0 sshd-session[58712]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:23:34 compute-0 sudo[58867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apoegzqtbbjerfqbnozilblfpxflvgxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401013.9036894-22-203223740622922/AnsiballZ_file.py'
Nov 29 07:23:34 compute-0 sudo[58867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:34 compute-0 python3.9[58869]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:34 compute-0 sudo[58867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:35 compute-0 sudo[59019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyhkxcesgtqcxmmsbwimomyfuahrcvlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.8885198-34-120312161352476/AnsiballZ_stat.py'
Nov 29 07:23:35 compute-0 sudo[59019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:35 compute-0 python3.9[59021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:35 compute-0 sudo[59019]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:35 compute-0 sshd-session[58805]: Received disconnect from 103.236.140.19 port 55692:11: Bye Bye [preauth]
Nov 29 07:23:35 compute-0 sshd-session[58805]: Disconnected from authenticating user root 103.236.140.19 port 55692 [preauth]
Nov 29 07:23:36 compute-0 sudo[59142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuarcxnuikejdpbehqtzbvalbhmxfcxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.8885198-34-120312161352476/AnsiballZ_copy.py'
Nov 29 07:23:36 compute-0 sudo[59142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:36 compute-0 python3.9[59144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401014.8885198-34-120312161352476/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:36 compute-0 sudo[59142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:36 compute-0 sshd-session[58715]: Connection closed by 192.168.122.30 port 55250
Nov 29 07:23:36 compute-0 sshd-session[58712]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:23:36 compute-0 systemd-logind[782]: Session 13 logged out. Waiting for processes to exit.
Nov 29 07:23:36 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 07:23:36 compute-0 systemd[1]: session-13.scope: Consumed 1.804s CPU time.
Nov 29 07:23:36 compute-0 systemd-logind[782]: Removed session 13.
Nov 29 07:23:42 compute-0 sshd-session[59169]: Accepted publickey for zuul from 192.168.122.30 port 49830 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:23:42 compute-0 systemd-logind[782]: New session 14 of user zuul.
Nov 29 07:23:42 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 29 07:23:42 compute-0 sshd-session[59169]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:23:43 compute-0 python3.9[59322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:23:44 compute-0 sshd-session[59346]: Received disconnect from 20.185.243.158 port 57164:11: Bye Bye [preauth]
Nov 29 07:23:44 compute-0 sshd-session[59346]: Disconnected from authenticating user root 20.185.243.158 port 57164 [preauth]
Nov 29 07:23:44 compute-0 sudo[59478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfolcdlonoguwwjqeugmvwzygbgovoiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401024.073921-33-185253819635925/AnsiballZ_file.py'
Nov 29 07:23:44 compute-0 sudo[59478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:44 compute-0 python3.9[59480]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:44 compute-0 sudo[59478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:45 compute-0 sudo[59653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kitcnftrinkgtwdnuzeqqjsxchpmcjsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401024.9357817-41-84710046202384/AnsiballZ_stat.py'
Nov 29 07:23:45 compute-0 sudo[59653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:45 compute-0 python3.9[59655]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:45 compute-0 sudo[59653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:46 compute-0 sudo[59776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjaaqjugzxvocjlpntlwvmloroepgimr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401024.9357817-41-84710046202384/AnsiballZ_copy.py'
Nov 29 07:23:46 compute-0 sudo[59776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:46 compute-0 python3.9[59778]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764401024.9357817-41-84710046202384/.source.json _original_basename=.08sfpvng follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:46 compute-0 sudo[59776]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:47 compute-0 sudo[59928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczpbchqejqyqtcferojraloaqynauxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401026.8718338-64-77088331091186/AnsiballZ_stat.py'
Nov 29 07:23:47 compute-0 sudo[59928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:47 compute-0 python3.9[59930]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:47 compute-0 sudo[59928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:47 compute-0 sudo[60051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xppspjivbxsphplfmqrmhfvdeynyhjfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401026.8718338-64-77088331091186/AnsiballZ_copy.py'
Nov 29 07:23:47 compute-0 sudo[60051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:48 compute-0 python3.9[60053]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401026.8718338-64-77088331091186/.source _original_basename=.u6c_ns66 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:48 compute-0 sudo[60051]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:48 compute-0 sudo[60203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebkvirdxqntqoudokewbhwaqjsnlknk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401028.3654234-80-179216841654061/AnsiballZ_file.py'
Nov 29 07:23:48 compute-0 sudo[60203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:48 compute-0 python3.9[60205]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:48 compute-0 sudo[60203]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:49 compute-0 sudo[60355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpqyjpqpgclxeiucfbuakbxvuzxeqket ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401029.0806944-88-24837147508842/AnsiballZ_stat.py'
Nov 29 07:23:49 compute-0 sudo[60355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:49 compute-0 python3.9[60357]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:49 compute-0 sudo[60355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:50 compute-0 sudo[60478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfiyzdieqlmuumbsdenumgaavbflaus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401029.0806944-88-24837147508842/AnsiballZ_copy.py'
Nov 29 07:23:50 compute-0 sudo[60478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:50 compute-0 python3.9[60480]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401029.0806944-88-24837147508842/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:50 compute-0 sudo[60478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:50 compute-0 sudo[60630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxwrfuqdzvhvotrksmuakqyqmjvoaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401030.4703312-88-17569588897585/AnsiballZ_stat.py'
Nov 29 07:23:50 compute-0 sudo[60630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:51 compute-0 python3.9[60632]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:51 compute-0 sudo[60630]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:51 compute-0 sudo[60754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwzndzustzwmvxyxwxwhmwawzhpyrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401030.4703312-88-17569588897585/AnsiballZ_copy.py'
Nov 29 07:23:51 compute-0 sudo[60754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:51 compute-0 sshd-session[60727]: Connection closed by 80.94.92.182 port 40934
Nov 29 07:23:51 compute-0 python3.9[60756]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401030.4703312-88-17569588897585/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:51 compute-0 sudo[60754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:52 compute-0 sudo[60906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dulfhvsiyadayqeurggqfteivplalkvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401031.9427726-117-236436087730669/AnsiballZ_file.py'
Nov 29 07:23:52 compute-0 sudo[60906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:52 compute-0 python3.9[60908]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:52 compute-0 sudo[60906]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:53 compute-0 sudo[61058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-farmxpcobpyvwxaqydwbrkgwapjukfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401032.7000499-125-232162728459448/AnsiballZ_stat.py'
Nov 29 07:23:53 compute-0 sudo[61058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:53 compute-0 python3.9[61060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:53 compute-0 sudo[61058]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:53 compute-0 sudo[61181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihjnmvqtkrsudubospzgikhxxmppjrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401032.7000499-125-232162728459448/AnsiballZ_copy.py'
Nov 29 07:23:53 compute-0 sudo[61181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:53 compute-0 python3.9[61183]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401032.7000499-125-232162728459448/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:53 compute-0 sudo[61181]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:54 compute-0 sudo[61333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqgzdkbwoovctuhdycpbtyreodqacek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401034.0694447-140-59140327975268/AnsiballZ_stat.py'
Nov 29 07:23:54 compute-0 sudo[61333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:54 compute-0 python3.9[61335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:54 compute-0 sudo[61333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:55 compute-0 sudo[61456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhppepajstzwqndgsfbyypivugcnyoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401034.0694447-140-59140327975268/AnsiballZ_copy.py'
Nov 29 07:23:55 compute-0 sudo[61456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:55 compute-0 python3.9[61458]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401034.0694447-140-59140327975268/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:55 compute-0 sudo[61456]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:56 compute-0 sudo[61608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getygdrmdqblxidqivwalcqlzjskssqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401035.4764476-155-125755826822539/AnsiballZ_systemd.py'
Nov 29 07:23:56 compute-0 sudo[61608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:56 compute-0 python3.9[61610]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:56 compute-0 systemd[1]: Reloading.
Nov 29 07:23:56 compute-0 systemd-rc-local-generator[61638]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:56 compute-0 systemd-sysv-generator[61641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:56 compute-0 systemd[1]: Reloading.
Nov 29 07:23:56 compute-0 systemd-sysv-generator[61680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:56 compute-0 systemd-rc-local-generator[61676]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:57 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 07:23:57 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 07:23:57 compute-0 sudo[61608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:57 compute-0 sudo[61837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctjsvdmgdilbvnvepefmwoihoklriawp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.2672737-163-233419826137420/AnsiballZ_stat.py'
Nov 29 07:23:57 compute-0 sudo[61837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:57 compute-0 python3.9[61839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:57 compute-0 sudo[61837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:58 compute-0 sudo[61960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoglgiguihmyyexvuwzlipvzkbkefwoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.2672737-163-233419826137420/AnsiballZ_copy.py'
Nov 29 07:23:58 compute-0 sudo[61960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:58 compute-0 python3.9[61962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401037.2672737-163-233419826137420/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:58 compute-0 sudo[61960]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-0 sudo[62112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqebcyaenlaxtorwenmesdbutwvoogx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401038.730468-178-61165312055109/AnsiballZ_stat.py'
Nov 29 07:23:59 compute-0 sudo[62112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:59 compute-0 python3.9[62114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:59 compute-0 sudo[62112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-0 sudo[62235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeerfbyecusdwsajybzainildwpczwki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401038.730468-178-61165312055109/AnsiballZ_copy.py'
Nov 29 07:23:59 compute-0 sudo[62235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:59 compute-0 python3.9[62237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401038.730468-178-61165312055109/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:59 compute-0 sudo[62235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:00 compute-0 sudo[62387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehotqosaqeygygdtkywvpdbkmjrfawgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401040.1158574-193-55850586733676/AnsiballZ_systemd.py'
Nov 29 07:24:00 compute-0 sudo[62387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:00 compute-0 python3.9[62389]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:00 compute-0 systemd[1]: Reloading.
Nov 29 07:24:00 compute-0 systemd-sysv-generator[62421]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:00 compute-0 systemd-rc-local-generator[62416]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:01 compute-0 systemd[1]: Reloading.
Nov 29 07:24:01 compute-0 systemd-rc-local-generator[62458]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:01 compute-0 systemd-sysv-generator[62462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:01 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:24:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:24:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:24:01 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:24:01 compute-0 sudo[62387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:02 compute-0 python3.9[62617]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:24:02 compute-0 network[62634]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:24:02 compute-0 network[62635]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:24:02 compute-0 network[62636]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:24:05 compute-0 sudo[62896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzqddttzqgzfixrdudpobbnkiqggbgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401045.4856362-209-278694202609761/AnsiballZ_systemd.py'
Nov 29 07:24:05 compute-0 sudo[62896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:06 compute-0 python3.9[62898]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:06 compute-0 systemd[1]: Reloading.
Nov 29 07:24:06 compute-0 systemd-sysv-generator[62927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:06 compute-0 systemd-rc-local-generator[62923]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:06 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 07:24:06 compute-0 iptables.init[62939]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 07:24:06 compute-0 iptables.init[62939]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 07:24:06 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 07:24:06 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 07:24:06 compute-0 sudo[62896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:07 compute-0 sudo[63133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzzpshwzfkjzvsnhwqiegrkrllwsmugg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401046.9573243-209-25771733407520/AnsiballZ_systemd.py'
Nov 29 07:24:07 compute-0 sudo[63133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:07 compute-0 python3.9[63135]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:07 compute-0 sudo[63133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:08 compute-0 sudo[63287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzyjgqmnbcwtluxthzohpvxehnumvxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401047.9023423-225-232419390427496/AnsiballZ_systemd.py'
Nov 29 07:24:08 compute-0 sudo[63287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:08 compute-0 python3.9[63289]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:08 compute-0 systemd[1]: Reloading.
Nov 29 07:24:08 compute-0 systemd-sysv-generator[63319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:08 compute-0 systemd-rc-local-generator[63310]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:09 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 29 07:24:09 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 29 07:24:09 compute-0 sudo[63287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:09 compute-0 sudo[63479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkzuentipxazlgcraedoibnjeklweydn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401049.3774362-233-232663168220318/AnsiballZ_command.py'
Nov 29 07:24:09 compute-0 sudo[63479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:10 compute-0 python3.9[63481]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:10 compute-0 sudo[63479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:10 compute-0 sudo[63632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdlenuckjummqbbsxilrtjpozpnfnqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401050.6149364-247-13136270062013/AnsiballZ_stat.py'
Nov 29 07:24:10 compute-0 sudo[63632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:11 compute-0 python3.9[63634]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:11 compute-0 sudo[63632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:11 compute-0 sudo[63759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xquyzplogdeytldkgaeljzkmvzyxtpie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401050.6149364-247-13136270062013/AnsiballZ_copy.py'
Nov 29 07:24:11 compute-0 sudo[63759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:11 compute-0 python3.9[63761]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401050.6149364-247-13136270062013/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:11 compute-0 sudo[63759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:12 compute-0 sudo[63912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igmogivbccgrbxqjjfchdsvcjmschexl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401052.0816967-262-61448445381609/AnsiballZ_systemd.py'
Nov 29 07:24:12 compute-0 sudo[63912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:12 compute-0 sshd-session[63707]: Invalid user testuser from 114.34.106.146 port 52058
Nov 29 07:24:12 compute-0 sshd-session[63707]: Received disconnect from 114.34.106.146 port 52058:11: Bye Bye [preauth]
Nov 29 07:24:12 compute-0 sshd-session[63707]: Disconnected from invalid user testuser 114.34.106.146 port 52058 [preauth]
Nov 29 07:24:12 compute-0 python3.9[63914]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:24:12 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 07:24:12 compute-0 sshd[1003]: Received SIGHUP; restarting.
Nov 29 07:24:12 compute-0 sshd[1003]: Server listening on 0.0.0.0 port 22.
Nov 29 07:24:12 compute-0 sshd[1003]: Server listening on :: port 22.
Nov 29 07:24:12 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 07:24:12 compute-0 sudo[63912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:13 compute-0 sudo[64068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmcnemuscsoyfuhrkhiflkcivjnrgdwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401053.075543-270-99685891173039/AnsiballZ_file.py'
Nov 29 07:24:13 compute-0 sudo[64068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:13 compute-0 python3.9[64070]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:13 compute-0 sudo[64068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:14 compute-0 sudo[64220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvphtyivhruxedjvsmviejyrhledybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401053.7853374-278-138905150644052/AnsiballZ_stat.py'
Nov 29 07:24:14 compute-0 sudo[64220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:14 compute-0 python3.9[64222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:14 compute-0 sudo[64220]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:14 compute-0 sudo[64343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljrfovazyxkdhotniggthxitnuqpakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401053.7853374-278-138905150644052/AnsiballZ_copy.py'
Nov 29 07:24:14 compute-0 sudo[64343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:14 compute-0 python3.9[64345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401053.7853374-278-138905150644052/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:14 compute-0 sudo[64343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:15 compute-0 sudo[64495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mesdeuxjpackehkzncrlcmelvtxclrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401055.2872624-296-150461148261333/AnsiballZ_timezone.py'
Nov 29 07:24:15 compute-0 sudo[64495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:16 compute-0 python3.9[64497]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:24:16 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 07:24:16 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 07:24:16 compute-0 sudo[64495]: pam_unix(sudo:session): session closed for user root
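[The timezone module works by talking to systemd-timedated, which is what transiently starts the Time & Date Service above; hwclock=None just means the RTC setting is left untouched. Sketch:

    - name: Pin the host clock to UTC (sketch)
      become: true
      community.general.timezone:
        name: UTC
]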
Nov 29 07:24:16 compute-0 sudo[64651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-premppiwgqvxcjbclfakbsjdtxlutthq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401056.4921408-305-50742952034472/AnsiballZ_file.py'
Nov 29 07:24:16 compute-0 sudo[64651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:17 compute-0 python3.9[64653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:17 compute-0 sudo[64651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:17 compute-0 sudo[64803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptymlrulgrwzvbjvlmzsyfbtoxhsmhte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401057.2430143-313-130246660465061/AnsiballZ_stat.py'
Nov 29 07:24:17 compute-0 sudo[64803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:17 compute-0 python3.9[64805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:17 compute-0 sudo[64803]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:18 compute-0 sudo[64926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttilowffocuanponbbkdsbswkotlckvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401057.2430143-313-130246660465061/AnsiballZ_copy.py'
Nov 29 07:24:18 compute-0 sudo[64926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:18 compute-0 python3.9[64928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401057.2430143-313-130246660465061/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:18 compute-0 sudo[64926]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:19 compute-0 sudo[65078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoqrtssnsmitzphyinjhlfthsphixlit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401058.7903214-328-268823793408728/AnsiballZ_stat.py'
Nov 29 07:24:19 compute-0 sudo[65078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:19 compute-0 python3.9[65080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:19 compute-0 sudo[65078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:19 compute-0 sudo[65201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjdxiwmooayfjsseqnwpmmsmtrvopbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401058.7903214-328-268823793408728/AnsiballZ_copy.py'
Nov 29 07:24:19 compute-0 sudo[65201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:20 compute-0 python3.9[65203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401058.7903214-328-268823793408728/.source.yaml _original_basename=.pgwy7k49 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:20 compute-0 sudo[65201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:20 compute-0 sudo[65353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfypdjjzptpovidyyqbdxzufjegodrhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401060.241994-343-35338094694660/AnsiballZ_stat.py'
Nov 29 07:24:20 compute-0 sudo[65353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:20 compute-0 python3.9[65355]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:20 compute-0 sudo[65353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:21 compute-0 sudo[65476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqpbnbfgyvlnxjjzznijyenqtsjymve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401060.241994-343-35338094694660/AnsiballZ_copy.py'
Nov 29 07:24:21 compute-0 sudo[65476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:21 compute-0 python3.9[65478]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401060.241994-343-35338094694660/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:21 compute-0 sudo[65476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 sudo[65628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgtcddtpfigncgfusgqikqibsdkelcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401061.6899908-358-82348662014142/AnsiballZ_command.py'
Nov 29 07:24:22 compute-0 sudo[65628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:22 compute-0 python3.9[65630]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:22 compute-0 sudo[65628]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-0 sudo[65781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuwhqinrthkmgxgvzxusaiksepggwaay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401062.7308338-366-238620050595672/AnsiballZ_command.py'
Nov 29 07:24:23 compute-0 sudo[65781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:23 compute-0 python3.9[65783]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:23 compute-0 sudo[65781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-0 sudo[65934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canzumymhrtgosnwczjfwwlyydnixasv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401063.4507215-374-159397144671300/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:24:23 compute-0 sudo[65934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:24 compute-0 python3[65936]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:24:24 compute-0 sudo[65934]: pam_unix(sudo:session): session closed for user root
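[edpm_nftables_from_files is a custom module from the edpm-ansible collection: it walks src=/var/lib/edpm-config/firewall and merges every YAML rule file dropped there (sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml above) into one rule list for the templating steps that follow. The schema is owned by that collection; the entry below is only an assumed shape for illustration, not the files' real contents:

    - rule_name: "003 Accept ssh from the ctlplane network"   # hypothetical rule
      rule:
        proto: tcp
        dport: 22
        source: 192.168.122.0/24
]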
Nov 29 07:24:24 compute-0 sudo[66086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqdxytvvywswoujwqdrbguuondrydzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401064.3963904-382-164128059213408/AnsiballZ_stat.py'
Nov 29 07:24:24 compute-0 sudo[66086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:24 compute-0 python3.9[66088]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:24 compute-0 sudo[66086]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-0 sudo[66209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqctpbzfsjiauonrcwvjjbmewjeuuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401064.3963904-382-164128059213408/AnsiballZ_copy.py'
Nov 29 07:24:25 compute-0 sudo[66209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:25 compute-0 python3.9[66211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401064.3963904-382-164128059213408/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:25 compute-0 sudo[66209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:26 compute-0 sudo[66361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnxsqccmtruehxfgzuvbxvvcnqdwxbvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401065.8199987-397-206301667398660/AnsiballZ_stat.py'
Nov 29 07:24:26 compute-0 sudo[66361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:26 compute-0 python3.9[66363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:26 compute-0 sudo[66361]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:26 compute-0 sudo[66484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpxlpwrpqsukiuhdkzatuvclnkwsonkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401065.8199987-397-206301667398660/AnsiballZ_copy.py'
Nov 29 07:24:26 compute-0 sudo[66484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:26 compute-0 python3.9[66486]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401065.8199987-397-206301667398660/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:26 compute-0 sudo[66484]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:27 compute-0 sudo[66636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cojgqpktxpvzruhojhnocnjwmmbiujtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401067.211016-412-68747783395846/AnsiballZ_stat.py'
Nov 29 07:24:27 compute-0 sudo[66636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:27 compute-0 python3.9[66638]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:27 compute-0 sudo[66636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:28 compute-0 sudo[66759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvnbdvlgxoyrqgouvxjxtiutrbaoshoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401067.211016-412-68747783395846/AnsiballZ_copy.py'
Nov 29 07:24:28 compute-0 sudo[66759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:28 compute-0 python3.9[66761]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401067.211016-412-68747783395846/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:28 compute-0 sudo[66759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:29 compute-0 sudo[66911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqejaglrgztxvwggklljrvnmbcenttcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401068.7202728-427-2732311314473/AnsiballZ_stat.py'
Nov 29 07:24:29 compute-0 sudo[66911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:29 compute-0 python3.9[66913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:29 compute-0 sudo[66911]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:29 compute-0 sudo[67034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mejvdbgvvgimtwogvhphufgmdjrlhheq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401068.7202728-427-2732311314473/AnsiballZ_copy.py'
Nov 29 07:24:29 compute-0 sudo[67034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:29 compute-0 python3.9[67036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401068.7202728-427-2732311314473/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:29 compute-0 sudo[67034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 sudo[67186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hysfubdilfnntyswnmwwojbpiwbgllfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401070.0754397-442-33547835476918/AnsiballZ_stat.py'
Nov 29 07:24:30 compute-0 sudo[67186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:30 compute-0 python3.9[67188]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:30 compute-0 sudo[67186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:31 compute-0 sudo[67309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdpvchzahqguhkceebdgyjfmqkhifaye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401070.0754397-442-33547835476918/AnsiballZ_copy.py'
Nov 29 07:24:31 compute-0 sudo[67309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:31 compute-0 python3.9[67311]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401070.0754397-442-33547835476918/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:31 compute-0 sudo[67309]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:31 compute-0 sudo[67461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuxhadkqribndgxhqpfkayjyslnrova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401071.500777-457-108267355669830/AnsiballZ_file.py'
Nov 29 07:24:31 compute-0 sudo[67461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:32 compute-0 python3.9[67463]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:32 compute-0 sudo[67461]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:32 compute-0 sudo[67613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cimmubvhqeiffkjewzhkdkpcwapwvzoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401072.298191-465-76449119603087/AnsiballZ_command.py'
Nov 29 07:24:32 compute-0 sudo[67613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:32 compute-0 python3.9[67615]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:33 compute-0 sudo[67613]: pam_unix(sudo:session): session closed for user root
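[Before anything is activated, the play concatenates the five generated fragments in their load order and dry-runs them through nft: -c (check mode) parses and validates but commits nothing. A sketch matching the logged _raw_params:

    - name: Validate the assembled edpm ruleset without applying it (sketch)
      become: true
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft
        /etc/nftables/edpm-jumps.nft | nft -c -f -

pipefail matters here: without it the task would report only nft's verdict, even if cat failed on a missing fragment.]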
Nov 29 07:24:33 compute-0 sudo[67772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafxshqhkuxkykenhfvrtbftehcjsxhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401073.1899917-473-120674171607354/AnsiballZ_blockinfile.py'
Nov 29 07:24:33 compute-0 sudo[67772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:34 compute-0 python3.9[67774]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:34 compute-0 sudo[67772]: pam_unix(sudo:session): session closed for user root
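[The blockinfile above (pid 67774) persists the new layout: it drops an ANSIBLE MANAGED BLOCK of include lines into /etc/sysconfig/nftables.conf, again guarded by validate=nft -c -f %s, so nftables.service restores the same ruleset at boot. Note that the boot-time block includes only chains, rules and jumps; the flush/update-jump fragments are presumably reserved for live reloads, where an existing ruleset must first be cleared. Sketch:

    - name: Include the edpm fragments from the boot-time nftables config (sketch)
      become: true
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"
]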
Nov 29 07:24:34 compute-0 sudo[67925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoedabhzvakuzianiyraxathzwhovvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401074.3166957-482-273436615065727/AnsiballZ_file.py'
Nov 29 07:24:34 compute-0 sudo[67925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:34 compute-0 python3.9[67927]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:34 compute-0 sudo[67925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:35 compute-0 sudo[68077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfhccbainnoanqayzqrlvkpwfpkzuvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401075.1345422-482-59916228347093/AnsiballZ_file.py'
Nov 29 07:24:35 compute-0 sudo[68077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:35 compute-0 python3.9[68079]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:35 compute-0 sudo[68077]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:36 compute-0 sudo[68229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosooansqpqinpnyiyjdbremqkgynqhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401075.9261081-497-23540066739390/AnsiballZ_mount.py'
Nov 29 07:24:36 compute-0 sudo[68229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:36 compute-0 python3.9[68231]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:24:36 compute-0 sudo[68229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:37 compute-0 sudo[68382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtejnpjnnelfwvfocgspvgockiohhtsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401076.888036-497-87109044029414/AnsiballZ_mount.py'
Nov 29 07:24:37 compute-0 sudo[68382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:37 compute-0 python3.9[68384]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:24:37 compute-0 sudo[68382]: pam_unix(sudo:session): session closed for user root
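[Two hugetlbfs pseudo-filesystems are mounted, one per page size; state=mounted both mounts immediately and, with boot=True, records the entry in /etc/fstab. A sketch of the pair as one looped task (the play evidently issues them separately, as the two invocations above show):

    - name: Mount 1G and 2M hugepage pools (sketch)
      become: true
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "pagesize={{ item.size }}"
        state: mounted
      loop:
        - { path: /dev/hugepages1G, size: 1G }
        - { path: /dev/hugepages2M, size: 2M }
]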
Nov 29 07:24:37 compute-0 sshd-session[59172]: Connection closed by 192.168.122.30 port 49830
Nov 29 07:24:37 compute-0 sshd-session[59169]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:24:37 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 07:24:37 compute-0 systemd-logind[782]: Session 14 logged out. Waiting for processes to exit.
Nov 29 07:24:37 compute-0 systemd[1]: session-14.scope: Consumed 40.378s CPU time.
Nov 29 07:24:37 compute-0 systemd-logind[782]: Removed session 14.
Nov 29 07:24:43 compute-0 sshd-session[68411]: Accepted publickey for zuul from 192.168.122.30 port 33898 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:24:43 compute-0 systemd-logind[782]: New session 15 of user zuul.
Nov 29 07:24:43 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 29 07:24:43 compute-0 sshd-session[68411]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:24:44 compute-0 sudo[68564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywryobissomtlxkojhovfltdfnerfjst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401083.6912026-16-197745162621912/AnsiballZ_tempfile.py'
Nov 29 07:24:44 compute-0 sudo[68564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:44 compute-0 python3.9[68566]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:24:44 compute-0 sudo[68564]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:44 compute-0 sudo[68718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxecrubtzvlehgxwyzdjcnytewebyjfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401084.5440068-28-29151056716388/AnsiballZ_stat.py'
Nov 29 07:24:44 compute-0 sudo[68718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:45 compute-0 python3.9[68720]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:24:45 compute-0 sudo[68718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:46 compute-0 sudo[68872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwyonlppcbhcqntxsnicxnwacnwszoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401085.467682-38-124868548272311/AnsiballZ_setup.py'
Nov 29 07:24:46 compute-0 sudo[68872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:46 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:24:46 compute-0 python3.9[68874]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:24:46 compute-0 sudo[68872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:46 compute-0 sshd-session[68666]: Received disconnect from 103.234.151.178 port 13990:11: Bye Bye [preauth]
Nov 29 07:24:46 compute-0 sshd-session[68666]: Disconnected from authenticating user root 103.234.151.178 port 13990 [preauth]
Nov 29 07:24:47 compute-0 sudo[69028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajujmkndlmtaxrfbpxukezlyntwjfhgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401086.8354187-47-147703492556746/AnsiballZ_blockinfile.py'
Nov 29 07:24:47 compute-0 sudo[69028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:47 compute-0 python3.9[69030]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRCkuuHZs47SCzcu7tjcimdSkKiN0f+0uYKphtJHmNUbnKzTarwpz0zTzwNB6Rkfa0cNXzjM0eQ2CPe+Snkw0qyJtc2enMqtbj360S5H3yQR2rhUDSVpK9OthgSSa87le6SFKfC02rAVgrgzJCApppzYPI9bW/0S+nrRzwKLahzug8A4ADYyEBlm+Jl3DGbTL5d+Ryvws5Qze65DRbFOeFwoKbeEYnrApC92h6s/WPMTg/PhnvcI8OEOvHjHiJwmglVVJJDpcwmcyHixANCM3zJgtVdS7gG+rMrXG4QRXI/xryq832mzGzPwl3y9wI8DGle8cAycke8o4IXAogH8jRzrJiFLG9v4CwlViuukpGRoURnKy50qhpjPFKMNUc69dZRerAgnEUjGQ2CytcZZPjdGsbwKXadWtQKzVIHo8voavaQrujz9oZY6UtfQWFC9kZrieKVrYwl/OxcAA2ta/ogyuwbE7/9Mq8b+4yY0ng8rzb9l4TDRQA6AxzAROl7H8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+05mnacD3gOVCqPwMC2ZPXt1TacIIrH2bpY65vzLCO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/zUKxW+GMg5y7+JQdnqaiSzO9TWuAqKOZ40ijEIbMdhLCmDPz2JJeEGgT+Ou1gk72ewR7yoXP5Gzbj0L3RGPI=
                                             create=True mode=0644 path=/tmp/ansible.aa8rmzml state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:47 compute-0 sudo[69028]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:47 compute-0 sshd-session[68847]: Received disconnect from 114.66.38.28 port 41276:11:  [preauth]
Nov 29 07:24:47 compute-0 sshd-session[68847]: Disconnected from authenticating user root 114.66.38.28 port 41276 [preauth]
Nov 29 07:24:48 compute-0 sudo[69180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkkzzgtaqpeiibqvrkullfijdxqcztio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401087.7872784-55-65020606426550/AnsiballZ_command.py'
Nov 29 07:24:48 compute-0 sudo[69180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:48 compute-0 python3.9[69182]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.aa8rmzml' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:48 compute-0 sudo[69180]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:48 compute-0 sshd-session[68976]: Invalid user terraria from 103.236.140.19 port 54458
Nov 29 07:24:48 compute-0 sshd-session[68976]: Received disconnect from 103.236.140.19 port 54458:11: Bye Bye [preauth]
Nov 29 07:24:48 compute-0 sshd-session[68976]: Disconnected from invalid user terraria 103.236.140.19 port 54458 [preauth]
Nov 29 07:24:49 compute-0 sudo[69334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikshdgliduvypccjfepirsdholpdbdky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401088.7614708-63-130856241362176/AnsiballZ_file.py'
Nov 29 07:24:49 compute-0 sudo[69334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:49 compute-0 python3.9[69336]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.aa8rmzml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:49 compute-0 sudo[69334]: pam_unix(sudo:session): session closed for user root
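[Session 15 is a small known_hosts workflow: create a temp file, gather the node's public keys via the setup module's ssh_host_key_* subsets, write one block per node into the temp file, promote it over /etc/ssh/ssh_known_hosts with cat, then delete the staging file. A condensed sketch; the registered/assembled variable names are assumptions:

    - name: Stage the assembled known_hosts (sketch)
      become: true
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp

    - name: Write the gathered host keys for each node
      become: true
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: "0644"
        block: "{{ assembled_host_keys }}"   # hypothetical variable

    - name: Promote the staged file
      become: true
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the staging file
      become: true
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent
]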
Nov 29 07:24:49 compute-0 sshd-session[68414]: Connection closed by 192.168.122.30 port 33898
Nov 29 07:24:49 compute-0 sshd-session[68411]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:24:49 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 07:24:49 compute-0 systemd[1]: session-15.scope: Consumed 4.134s CPU time.
Nov 29 07:24:49 compute-0 systemd-logind[782]: Session 15 logged out. Waiting for processes to exit.
Nov 29 07:24:49 compute-0 systemd-logind[782]: Removed session 15.
Nov 29 07:24:55 compute-0 sshd-session[69361]: Accepted publickey for zuul from 192.168.122.30 port 37360 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:24:55 compute-0 systemd-logind[782]: New session 16 of user zuul.
Nov 29 07:24:55 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 29 07:24:55 compute-0 sshd-session[69361]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:24:56 compute-0 python3.9[69514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:24:57 compute-0 sudo[69668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvfiklkozjnqchafelhipftxbbatnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401097.175346-32-100376677469966/AnsiballZ_systemd.py'
Nov 29 07:24:57 compute-0 sudo[69668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:58 compute-0 python3.9[69670]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:24:58 compute-0 sudo[69668]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:58 compute-0 sudo[69822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfqnyywmovuabnfiaeljsnwymhrgiht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401098.4534647-40-249971499755538/AnsiballZ_systemd.py'
Nov 29 07:24:58 compute-0 sudo[69822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:59 compute-0 python3.9[69824]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:24:59 compute-0 sudo[69822]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:59 compute-0 sudo[69977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgwalgzsnjfuweledtxsiirtlxlfonm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401099.3653693-49-170357913836848/AnsiballZ_command.py'
Nov 29 07:24:59 compute-0 sudo[69977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:00 compute-0 python3.9[69979]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:00 compute-0 sudo[69977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:00 compute-0 sudo[70132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdcyezimqjozarwwumjkldybwehhzzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401100.3077443-57-55903318461076/AnsiballZ_stat.py'
Nov 29 07:25:00 compute-0 sudo[70132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:00 compute-0 sshd-session[70080]: Received disconnect from 20.185.243.158 port 39568:11: Bye Bye [preauth]
Nov 29 07:25:00 compute-0 sshd-session[70080]: Disconnected from authenticating user root 20.185.243.158 port 39568 [preauth]
Nov 29 07:25:00 compute-0 python3.9[70134]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:25:00 compute-0 sudo[70132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:01 compute-0 anacron[30971]: Job `cron.daily' started
Nov 29 07:25:01 compute-0 anacron[30971]: Job `cron.daily' terminated
Nov 29 07:25:01 compute-0 sshd-session[69825]: Invalid user user from 45.78.219.195 port 41320
Nov 29 07:25:01 compute-0 sudo[70288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chulyuggcsolbjdomyyiyqonojcffftt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401101.1606781-65-272972757282978/AnsiballZ_command.py'
Nov 29 07:25:01 compute-0 sudo[70288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:01 compute-0 python3.9[70290]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:01 compute-0 sudo[70288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:02 compute-0 sudo[70443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qusfntthypjflecfcnpxpoysrusvhynl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401101.899662-73-146731150411851/AnsiballZ_file.py'
Nov 29 07:25:02 compute-0 sudo[70443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:02 compute-0 sshd-session[69825]: Received disconnect from 45.78.219.195 port 41320:11: Bye Bye [preauth]
Nov 29 07:25:02 compute-0 sshd-session[69825]: Disconnected from invalid user user 45.78.219.195 port 41320 [preauth]
Nov 29 07:25:02 compute-0 python3.9[70445]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:02 compute-0 sudo[70443]: pam_unix(sudo:session): session closed for user root
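[This block closes the loop on the marker file touched at 07:24:32: edpm-chains.nft is loaded unconditionally, but the destructive flush-and-reload of the rule set runs only while edpm-rules.nft.changed exists, and the marker is deleted afterwards so an unchanged second run leaves the live rules alone. One plausible reconstruction:

    - name: Always load the chain skeleton (sketch)
      become: true
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Did the ruleset change on this run?
      become: true
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed

    - name: Flush and reload rules only when the marker exists
      become: true
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft
        /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Consume the marker
      become: true
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: edpm_rules_changed.stat.exists
]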
Nov 29 07:25:02 compute-0 sshd-session[69364]: Connection closed by 192.168.122.30 port 37360
Nov 29 07:25:02 compute-0 sshd-session[69361]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:25:02 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 07:25:02 compute-0 systemd[1]: session-16.scope: Consumed 5.107s CPU time.
Nov 29 07:25:02 compute-0 systemd-logind[782]: Session 16 logged out. Waiting for processes to exit.
Nov 29 07:25:02 compute-0 systemd-logind[782]: Removed session 16.
Nov 29 07:25:09 compute-0 sshd-session[70471]: Accepted publickey for zuul from 192.168.122.30 port 47314 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:25:09 compute-0 systemd-logind[782]: New session 17 of user zuul.
Nov 29 07:25:09 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 29 07:25:09 compute-0 sshd-session[70471]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:25:10 compute-0 python3.9[70624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:25:11 compute-0 sudo[70778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hianzybewkuabkwzsgyupmdrjeivcjlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401111.078381-34-124403517387821/AnsiballZ_setup.py'
Nov 29 07:25:11 compute-0 sudo[70778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:11 compute-0 python3.9[70780]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:25:11 compute-0 sudo[70778]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:12 compute-0 sudo[70862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mustetfmhcfsjndkhpsjtjrnoapphoch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401111.078381-34-124403517387821/AnsiballZ_dnf.py'
Nov 29 07:25:12 compute-0 sudo[70862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:12 compute-0 python3.9[70864]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:25:14 compute-0 sudo[70862]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:14 compute-0 python3.9[71015]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
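[needs-restarting -r (from the yum-utils package installed just above) reports through its exit code: 0 when no reboot is needed, 1 when core packages such as the kernel or glibc have been updated since boot. A bare command task would treat rc=1 as a failure, so the play presumably wraps it; a sketch with an assumed fact name:

    - name: Check whether the node needs a reboot (sketch)
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting
      changed_when: false
      failed_when: needs_restarting.rc not in [0, 1]

    - name: Record the verdict (hypothetical fact name)
      ansible.builtin.set_fact:
        edpm_reboot_required: "{{ needs_restarting.rc == 1 }}"
]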
Nov 29 07:25:16 compute-0 python3.9[71166]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:25:16 compute-0 python3.9[71316]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:25:16 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:25:17 compute-0 python3.9[71467]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:25:18 compute-0 sshd-session[70474]: Connection closed by 192.168.122.30 port 47314
Nov 29 07:25:18 compute-0 sshd-session[70471]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:25:18 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 07:25:18 compute-0 systemd[1]: session-17.scope: Consumed 6.454s CPU time.
Nov 29 07:25:18 compute-0 systemd-logind[782]: Session 17 logged out. Waiting for processes to exit.
Nov 29 07:25:18 compute-0 systemd-logind[782]: Removed session 17.
Nov 29 07:25:19 compute-0 sshd-session[70470]: error: kex_exchange_identification: read: Connection timed out
Nov 29 07:25:19 compute-0 sshd-session[70470]: banner exchange: Connection from 14.116.156.100 port 59598: Connection timed out
Nov 29 07:25:26 compute-0 sshd-session[71492]: Received disconnect from 114.34.106.146 port 54560:11: Bye Bye [preauth]
Nov 29 07:25:26 compute-0 sshd-session[71492]: Disconnected from authenticating user root 114.34.106.146 port 54560 [preauth]
Nov 29 07:25:26 compute-0 sshd-session[71494]: Accepted publickey for zuul from 38.102.83.164 port 38010 ssh2: RSA SHA256:2nuGCieGc55QXSoqUlkpsd2tLSAxk15pqg2+HX/vZuM
Nov 29 07:25:26 compute-0 systemd-logind[782]: New session 18 of user zuul.
Nov 29 07:25:26 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 29 07:25:26 compute-0 sshd-session[71494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:25:27 compute-0 sudo[71570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpzapgsrxfhlmxrzgggvjbxhqiroungc ; /usr/bin/python3'
Nov 29 07:25:27 compute-0 sudo[71570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:27 compute-0 useradd[71574]: new group: name=ceph-admin, GID=42478
Nov 29 07:25:27 compute-0 useradd[71574]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 29 07:25:27 compute-0 sudo[71570]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:27 compute-0 sudo[71656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjcmmeniezpmxlfteygakwnhxyijcge ; /usr/bin/python3'
Nov 29 07:25:27 compute-0 sudo[71656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:28 compute-0 sudo[71656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:28 compute-0 sudo[71729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuenooaflnfehmuwhpyysfthifmdajaw ; /usr/bin/python3'
Nov 29 07:25:28 compute-0 sudo[71729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:28 compute-0 sudo[71729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:28 compute-0 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxkeorxnrdyrmnymsvczmkemxmqfkyvp ; /usr/bin/python3'
Nov 29 07:25:28 compute-0 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:29 compute-0 sudo[71779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:29 compute-0 sudo[71805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmssyfvdgehmyquuvcitlietgmtakas ; /usr/bin/python3'
Nov 29 07:25:29 compute-0 sudo[71805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:29 compute-0 sudo[71805]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:29 compute-0 sudo[71831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phyinpqupuxlcmqhdqcnnwfkozsluutw ; /usr/bin/python3'
Nov 29 07:25:29 compute-0 sudo[71831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:29 compute-0 sudo[71831]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:30 compute-0 sudo[71857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpqmgpnfnuhnqvrayvynhiogxapqbuxz ; /usr/bin/python3'
Nov 29 07:25:30 compute-0 sudo[71857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:30 compute-0 sudo[71857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:30 compute-0 sudo[71935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfguandhnhawfnumplblunlitmbsqapn ; /usr/bin/python3'
Nov 29 07:25:30 compute-0 sudo[71935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:30 compute-0 sudo[71935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:30 compute-0 sudo[72008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqgugceyxvsnvlfwsvxbuperfcryhses ; /usr/bin/python3'
Nov 29 07:25:30 compute-0 sudo[72008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:31 compute-0 sudo[72008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:31 compute-0 sudo[72110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjeufchcywedjtyzqqnnalwjylshpjeu ; /usr/bin/python3'
Nov 29 07:25:31 compute-0 sudo[72110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:31 compute-0 sudo[72110]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:31 compute-0 sudo[72183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpazxzbxcbevvwnyczudvefzgrluqaut ; /usr/bin/python3'
Nov 29 07:25:31 compute-0 sudo[72183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:31 compute-0 sudo[72183]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:32 compute-0 sudo[72233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ornicppklbdkmutoqhmwsjamgxlhotck ; /usr/bin/python3'
Nov 29 07:25:32 compute-0 sudo[72233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:32 compute-0 python3[72235]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:25:33 compute-0 sudo[72233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:34 compute-0 sudo[72328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkktmukghxfsulinuzkojbxuycncmhog ; /usr/bin/python3'
Nov 29 07:25:34 compute-0 sudo[72328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:34 compute-0 python3[72330]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:25:35 compute-0 sudo[72328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:35 compute-0 sudo[72355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itplwykpborngsezneltgdfwognomsmy ; /usr/bin/python3'
Nov 29 07:25:35 compute-0 sudo[72355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:35 compute-0 python3[72357]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:25:35 compute-0 sudo[72355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:36 compute-0 sudo[72381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhzsdiktvqozfgukxcowcoqmuxqdmycd ; /usr/bin/python3'
Nov 29 07:25:36 compute-0 sudo[72381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:36 compute-0 python3[72383]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:36 compute-0 kernel: loop: module loaded
Nov 29 07:25:36 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 29 07:25:36 compute-0 sudo[72381]: pam_unix(sudo:session): session closed for user root
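[The shell task above fabricates a disk for a test OSD: dd with count=0 seek=20G writes no data and merely truncates /var/lib/ceph-osd-0.img to 20 GiB (a sparse file), and losetup attaches it as /dev/loop3; the kernel's "capacity change from 0 to 41943040" is exactly 41943040 x 512 B = 20 GiB. As a task:

    - name: Back a throwaway Ceph OSD with a sparse 20G loop device (sketch)
      become: true
      ansible.builtin.shell: |
        dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
        losetup /dev/loop3 /var/lib/ceph-osd-0.img
        lsblk
]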
Nov 29 07:25:37 compute-0 sudo[72416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjyluqrkqtjslxxblzlrzmrjmghmfcya ; /usr/bin/python3'
Nov 29 07:25:37 compute-0 sudo[72416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:37 compute-0 python3[72418]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:38 compute-0 lvm[72421]: PV /dev/loop3 not used.
Nov 29 07:25:38 compute-0 lvm[72423]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:25:38 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 07:25:38 compute-0 lvm[72425]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 07:25:38 compute-0 chronyd[58684]: Selected source 216.232.132.102 (pool.ntp.org)
Nov 29 07:25:38 compute-0 lvm[72433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:25:38 compute-0 lvm[72433]: VG ceph_vg0 finished
Nov 29 07:25:38 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 07:25:38 compute-0 sudo[72416]: pam_unix(sudo:session): session closed for user root
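[On top of the loop device the play builds a one-PV volume group and hands all of its extents to a single logical volume (-l +100%FREE), presumably for ceph-volume to consume later; the lvm/systemd chatter that follows is lvm2's event-driven autoactivation noticing the new VG. As a task:

    - name: Build the LVM stack for the OSD (sketch)
      become: true
      ansible.builtin.shell: |
        pvcreate /dev/loop3
        vgcreate ceph_vg0 /dev/loop3
        lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
        lvs
]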
Nov 29 07:25:38 compute-0 sudo[72509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjnnypoifywypxfsxruxbdavcnfnwkp ; /usr/bin/python3'
Nov 29 07:25:38 compute-0 sudo[72509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:38 compute-0 python3[72511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:25:38 compute-0 sudo[72509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:39 compute-0 sudo[72582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsdaxopqtlekgostksepbplhfqzokykj ; /usr/bin/python3'
Nov 29 07:25:39 compute-0 sudo[72582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:39 compute-0 python3[72584]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401138.5746305-36532-107899240906348/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:39 compute-0 sudo[72582]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:39 compute-0 sudo[72632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvwqntmwhputrsqtcbsbizmmdprinhqb ; /usr/bin/python3'
Nov 29 07:25:39 compute-0 sudo[72632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:40 compute-0 python3[72634]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:40 compute-0 systemd[1]: Reloading.
Nov 29 07:25:40 compute-0 systemd-sysv-generator[72664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:40 compute-0 systemd-rc-local-generator[72661]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:40 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:25:40 compute-0 bash[72674]: /dev/loop3: [64513]:4327936 (/var/lib/ceph-osd-0.img)
Nov 29 07:25:40 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:25:40 compute-0 sudo[72632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:40 compute-0 lvm[72676]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:25:40 compute-0 lvm[72676]: VG ceph_vg0 finished
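The ceph-osd-losetup-0.service unit rendered from ceph-osd-losetup.service.j2 is enabled and started so the backing image can be re-attached to /dev/loop3 across reboots; its contents are not logged (content=NOT_LOGGING_PARAMETER), so the snippet below is only a hypothetical reconstruction that is consistent with the oneshot "Starting/Finished Ceph OSD losetup" messages and with the losetup status line printed by bash[72674]:

    # Hypothetical shape of /etc/systemd/system/ceph-osd-losetup-0.service;
    # the real template is not shown anywhere in this log.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Print the mapping if loop3 is already bound, otherwise (re)attach the image.
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service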
Nov 29 07:25:40 compute-0 sudo[72700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjtuiingttlxdkfeavanqpycuknoton ; /usr/bin/python3'
Nov 29 07:25:40 compute-0 sudo[72700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:40 compute-0 python3[72702]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:25:42 compute-0 sudo[72700]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:42 compute-0 sudo[72727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weopepjpjzvsvrqppwrewgbfwmdggtab ; /usr/bin/python3'
Nov 29 07:25:42 compute-0 sudo[72727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:42 compute-0 python3[72729]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:25:42 compute-0 sudo[72727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:43 compute-0 sudo[72753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnvchgjwjmilsevqylpkjkvdykxtrual ; /usr/bin/python3'
Nov 29 07:25:43 compute-0 sudo[72753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:43 compute-0 python3[72755]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:43 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 29 07:25:43 compute-0 sudo[72753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:43 compute-0 sudo[72784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaexjxynjsuggufxgjioeibtzixsdsei ; /usr/bin/python3'
Nov 29 07:25:43 compute-0 sudo[72784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:43 compute-0 python3[72786]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:43 compute-0 lvm[72790]: PV /dev/loop4 not used.
Nov 29 07:25:43 compute-0 lvm[72800]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:25:43 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 29 07:25:43 compute-0 sudo[72784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:43 compute-0 lvm[72802]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 29 07:25:43 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 29 07:25:44 compute-0 sudo[72878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxupzedkovwxpkucfvsetgtrcgylhps ; /usr/bin/python3'
Nov 29 07:25:44 compute-0 sudo[72878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:44 compute-0 python3[72880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:25:44 compute-0 sudo[72878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:44 compute-0 sudo[72951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xugkehxovufrhzxkrlzmfxibijongguf ; /usr/bin/python3'
Nov 29 07:25:44 compute-0 sudo[72951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:44 compute-0 python3[72953]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401143.9445376-36559-72364212946256/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:44 compute-0 sudo[72951]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:44 compute-0 sudo[73001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzxlhdrlwhpuntiozqkmeyldnfggvjms ; /usr/bin/python3'
Nov 29 07:25:44 compute-0 sudo[73001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:45 compute-0 python3[73003]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:45 compute-0 systemd[1]: Reloading.
Nov 29 07:25:45 compute-0 systemd-rc-local-generator[73031]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:45 compute-0 systemd-sysv-generator[73036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:45 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:25:45 compute-0 bash[73043]: /dev/loop4: [64513]:4327939 (/var/lib/ceph-osd-1.img)
Nov 29 07:25:45 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:25:45 compute-0 lvm[73044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:25:45 compute-0 lvm[73044]: VG ceph_vg1 finished
Nov 29 07:25:45 compute-0 sudo[73001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:45 compute-0 sudo[73068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shebiywbwkgwcqzhwphvgjldzjqwvwfy ; /usr/bin/python3'
Nov 29 07:25:45 compute-0 sudo[73068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:46 compute-0 python3[73070]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:25:47 compute-0 sudo[73068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:47 compute-0 sudo[73095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgsuqqzmwotxehahvfiksledfasdatlu ; /usr/bin/python3'
Nov 29 07:25:47 compute-0 sudo[73095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:47 compute-0 python3[73097]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:25:47 compute-0 sudo[73095]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:47 compute-0 sudo[73121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyrdeqdaubuqhqmscvhqheinoaietmjj ; /usr/bin/python3'
Nov 29 07:25:47 compute-0 sudo[73121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:48 compute-0 python3[73123]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:48 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 29 07:25:48 compute-0 sudo[73121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:48 compute-0 sudo[73153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynytdpyrfqimuigldvykodzbhwbijwa ; /usr/bin/python3'
Nov 29 07:25:48 compute-0 sudo[73153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:48 compute-0 python3[73155]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:48 compute-0 lvm[73158]: PV /dev/loop5 not used.
Nov 29 07:25:49 compute-0 lvm[73160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:25:49 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 29 07:25:49 compute-0 lvm[73162]:   0 logical volume(s) in volume group "ceph_vg2" now active
Nov 29 07:25:49 compute-0 lvm[73163]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:25:49 compute-0 lvm[73163]: VG ceph_vg2 finished
Nov 29 07:25:49 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 29 07:25:49 compute-0 lvm[73172]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:25:49 compute-0 lvm[73172]: VG ceph_vg2 finished
Nov 29 07:25:49 compute-0 sudo[73153]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:50 compute-0 sudo[73248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyxmofccpnaejbpkwzmeoapqcfzxrxd ; /usr/bin/python3'
Nov 29 07:25:50 compute-0 sudo[73248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:50 compute-0 python3[73250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:25:50 compute-0 sudo[73248]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:50 compute-0 sudo[73321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcerhzioygfjqybbndvguyggdjgvnlwb ; /usr/bin/python3'
Nov 29 07:25:50 compute-0 sudo[73321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:50 compute-0 python3[73323]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401150.0418537-36586-56367456938745/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:50 compute-0 sudo[73321]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:50 compute-0 sudo[73371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smcvfhoioznytjlrhtootkgzdegzverc ; /usr/bin/python3'
Nov 29 07:25:50 compute-0 sudo[73371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:51 compute-0 python3[73373]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:51 compute-0 systemd[1]: Reloading.
Nov 29 07:25:51 compute-0 systemd-rc-local-generator[73397]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:51 compute-0 systemd-sysv-generator[73404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:51 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:25:51 compute-0 bash[73412]: /dev/loop5: [64513]:4328632 (/var/lib/ceph-osd-2.img)
Nov 29 07:25:51 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:25:51 compute-0 sudo[73371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:51 compute-0 lvm[73414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:25:51 compute-0 lvm[73414]: VG ceph_vg2 finished
Nov 29 07:25:53 compute-0 python3[73439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:25:56 compute-0 sudo[73530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcumtqtxuzbhbdvtbzzrgsagwmowswl ; /usr/bin/python3'
Nov 29 07:25:56 compute-0 sudo[73530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:56 compute-0 python3[73532]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:25:58 compute-0 groupadd[73538]: group added to /etc/group: name=cephadm, GID=992
Nov 29 07:25:58 compute-0 groupadd[73538]: group added to /etc/gshadow: name=cephadm
Nov 29 07:25:58 compute-0 groupadd[73538]: new group: name=cephadm, GID=992
Nov 29 07:25:58 compute-0 useradd[73545]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 29 07:25:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:25:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:25:59 compute-0 sudo[73530]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:25:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:25:59 compute-0 systemd[1]: run-rcb23c35ae22d46658759429dea2e3ed3.service: Deactivated successfully.
Nov 29 07:25:59 compute-0 sudo[73642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkaykdbrokvkuunbgaxjgylqqejbrjfa ; /usr/bin/python3'
Nov 29 07:25:59 compute-0 sudo[73642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:59 compute-0 python3[73644]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:25:59 compute-0 sudo[73642]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 sudo[73670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymehavnermylulepaiebwjjzzhgbrow ; /usr/bin/python3'
Nov 29 07:25:59 compute-0 sudo[73670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:59 compute-0 python3[73672]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:25:59 compute-0 sudo[73670]: pam_unix(sudo:session): session closed for user root
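Before bootstrapping, the play runs `/usr/sbin/cephadm ls --no-detail`, which inventories any cephadm-managed daemons already present on the host as a JSON list; on a fresh node the list is empty, which is presumably how the play decides that bootstrap is still required. A quick way to express that check (the jq count is an illustration, not something taken from the log; jq was installed by the earlier dnf task):

    # Count existing cephadm-managed daemons on this host; 0 means nothing is deployed yet.
    /usr/sbin/cephadm ls --no-detail | jq length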
Nov 29 07:26:00 compute-0 sudo[73735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icecznwoqyfhyavuxddzscqvcqicnplc ; /usr/bin/python3'
Nov 29 07:26:00 compute-0 sudo[73735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:00 compute-0 python3[73737]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:00 compute-0 sudo[73735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:00 compute-0 sudo[73761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkurwrwvnjugwdmrgbbnatfggysplqc ; /usr/bin/python3'
Nov 29 07:26:00 compute-0 sudo[73761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:00 compute-0 python3[73763]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:00 compute-0 sudo[73761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 sshd-session[73710]: Invalid user roott from 103.234.151.178 port 37812
Nov 29 07:26:01 compute-0 sudo[73839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqknmbwmlpffybvuioajddvdcvhxndib ; /usr/bin/python3'
Nov 29 07:26:01 compute-0 sudo[73839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:01 compute-0 sshd-session[73710]: Received disconnect from 103.234.151.178 port 37812:11: Bye Bye [preauth]
Nov 29 07:26:01 compute-0 sshd-session[73710]: Disconnected from invalid user roott 103.234.151.178 port 37812 [preauth]
Nov 29 07:26:01 compute-0 python3[73841]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:26:01 compute-0 sudo[73839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 sudo[73912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hysvnatpmqfpfnghdxaurakjrcadlyxz ; /usr/bin/python3'
Nov 29 07:26:01 compute-0 sudo[73912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:02 compute-0 python3[73914]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401161.384832-36733-202356543543366/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:02 compute-0 sudo[73912]: pam_unix(sudo:session): session closed for user root
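The ceph_spec.yaml placed under /home/ceph-admin/specs is the cephadm service specification that will later be applied to the cluster. Its contents are not logged (content=NOT_LOGGING_PARAMETER), so the snippet below is purely a hypothetical illustration of an OSD spec that would match the three logical volumes created above, not the file actually copied:

    # Purely illustrative OSD service spec; the real ceph_spec.yaml is not shown in the log.
    cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    data_devices:
      paths:
        - /dev/ceph_vg0/ceph_lv0
        - /dev/ceph_vg1/ceph_lv1
        - /dev/ceph_vg2/ceph_lv2
    EOF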
Nov 29 07:26:02 compute-0 sudo[74014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldawbluvzkiflkorapfyahrsixltbzkj ; /usr/bin/python3'
Nov 29 07:26:02 compute-0 sudo[74014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:02 compute-0 python3[74016]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:26:02 compute-0 sudo[74014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:02 compute-0 sudo[74087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqcznixeubbworibcerdompfazesijw ; /usr/bin/python3'
Nov 29 07:26:02 compute-0 sudo[74087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:03 compute-0 python3[74089]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401162.4950833-36751-114262885218128/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:03 compute-0 sudo[74087]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:03 compute-0 sudo[74137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvgmjisjyizrcypqakwtphrwywsujhu ; /usr/bin/python3'
Nov 29 07:26:03 compute-0 sudo[74137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:03 compute-0 python3[74139]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:26:03 compute-0 sudo[74137]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:03 compute-0 sudo[74165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-islyxunjugulmnnjsxzfczrbhhlkedir ; /usr/bin/python3'
Nov 29 07:26:03 compute-0 sudo[74165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:03 compute-0 python3[74167]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:26:03 compute-0 sudo[74165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[74193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sthkcmpjnrsaoqynzuatmqupkqsdazhy ; /usr/bin/python3'
Nov 29 07:26:04 compute-0 sudo[74193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:04 compute-0 python3[74195]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:26:04 compute-0 sudo[74193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[74221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khpqegfuigizobmpcqzlhtdmxlrrisjh ; /usr/bin/python3'
Nov 29 07:26:04 compute-0 sudo[74221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:04 compute-0 python3[74223]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
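This is the main cephadm bootstrap call. The stray `\--` sequences in the logged command most likely come from backslash line-continuations in the playbook being passed through literally; the shell strips the backslashes, so the flags still take effect. The same command, reflowed for readability with exactly the flags and values recorded above:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100

The ceph-admin SSH login from 192.168.122.100 and the `sudo /bin/echo` that follow immediately below appear to be cephadm verifying that the --ssh-user account has working passwordless SSH and sudo before it proceeds.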
Nov 29 07:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:04 compute-0 sshd-session[74239]: Accepted publickey for ceph-admin from 192.168.122.100 port 38260 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:26:04 compute-0 systemd-logind[782]: New session 19 of user ceph-admin.
Nov 29 07:26:04 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 07:26:04 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 07:26:04 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 07:26:04 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 07:26:04 compute-0 systemd[74243]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:26:04 compute-0 systemd[74243]: Queued start job for default target Main User Target.
Nov 29 07:26:04 compute-0 systemd[74243]: Created slice User Application Slice.
Nov 29 07:26:04 compute-0 systemd[74243]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:26:04 compute-0 systemd[74243]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:26:04 compute-0 systemd[74243]: Reached target Paths.
Nov 29 07:26:04 compute-0 systemd[74243]: Reached target Timers.
Nov 29 07:26:04 compute-0 systemd[74243]: Starting D-Bus User Message Bus Socket...
Nov 29 07:26:04 compute-0 systemd[74243]: Starting Create User's Volatile Files and Directories...
Nov 29 07:26:05 compute-0 systemd[74243]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:26:05 compute-0 systemd[74243]: Reached target Sockets.
Nov 29 07:26:05 compute-0 systemd[74243]: Finished Create User's Volatile Files and Directories.
Nov 29 07:26:05 compute-0 systemd[74243]: Reached target Basic System.
Nov 29 07:26:05 compute-0 systemd[74243]: Reached target Main User Target.
Nov 29 07:26:05 compute-0 systemd[74243]: Startup finished in 123ms.
Nov 29 07:26:05 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 07:26:05 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 29 07:26:05 compute-0 sshd-session[74239]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:26:05 compute-0 sudo[74260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 29 07:26:05 compute-0 sudo[74260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:05 compute-0 sudo[74260]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:05 compute-0 sshd-session[74259]: Received disconnect from 192.168.122.100 port 38260:11: disconnected by user
Nov 29 07:26:05 compute-0 sshd-session[74259]: Disconnected from user ceph-admin 192.168.122.100 port 38260
Nov 29 07:26:05 compute-0 sshd-session[74239]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 29 07:26:05 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 07:26:05 compute-0 systemd-logind[782]: Session 19 logged out. Waiting for processes to exit.
Nov 29 07:26:05 compute-0 systemd-logind[782]: Removed session 19.
Nov 29 07:26:05 compute-0 sshd-session[74224]: Invalid user server from 103.236.140.19 port 43430
Nov 29 07:26:06 compute-0 sshd-session[74224]: Received disconnect from 103.236.140.19 port 43430:11: Bye Bye [preauth]
Nov 29 07:26:06 compute-0 sshd-session[74224]: Disconnected from invalid user server 103.236.140.19 port 43430 [preauth]
Nov 29 07:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat99002373-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 07:26:15 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 07:26:19 compute-0 sshd-session[74355]: Invalid user work from 20.185.243.158 port 47436
Nov 29 07:26:19 compute-0 sshd-session[74355]: Received disconnect from 20.185.243.158 port 47436:11: Bye Bye [preauth]
Nov 29 07:26:19 compute-0 sshd-session[74355]: Disconnected from invalid user work 20.185.243.158 port 47436 [preauth]
Nov 29 07:26:19 compute-0 systemd[74243]: Activating special unit Exit the Session...
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped target Main User Target.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped target Basic System.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped target Paths.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped target Sockets.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped target Timers.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:26:19 compute-0 systemd[74243]: Closed D-Bus User Message Bus Socket.
Nov 29 07:26:19 compute-0 systemd[74243]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:26:19 compute-0 systemd[74243]: Removed slice User Application Slice.
Nov 29 07:26:19 compute-0 systemd[74243]: Reached target Shutdown.
Nov 29 07:26:19 compute-0 systemd[74243]: Finished Exit the Session.
Nov 29 07:26:19 compute-0 systemd[74243]: Reached target Exit the Session.
Nov 29 07:26:19 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 07:26:19 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 07:26:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 07:26:20 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 07:26:20 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 07:26:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 07:26:20 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 07:26:37 compute-0 podman[74297]: 2025-11-29 07:26:37.776014695 +0000 UTC m=+32.603939838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:37 compute-0 podman[74358]: 2025-11-29 07:26:37.864897174 +0000 UTC m=+0.057085495 container create 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:26:37 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 07:26:37 compute-0 systemd[1]: Started libpod-conmon-517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b.scope.
Nov 29 07:26:37 compute-0 podman[74358]: 2025-11-29 07:26:37.839573543 +0000 UTC m=+0.031761884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:37 compute-0 podman[74358]: 2025-11-29 07:26:37.976485592 +0000 UTC m=+0.168673943 container init 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:37 compute-0 podman[74358]: 2025-11-29 07:26:37.984872627 +0000 UTC m=+0.177060948 container start 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:37 compute-0 podman[74358]: 2025-11-29 07:26:37.989331877 +0000 UTC m=+0.181520268 container attach 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:38 compute-0 eloquent_dewdney[74374]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 07:26:38 compute-0 systemd[1]: libpod-517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b.scope: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74358]: 2025-11-29 07:26:38.336046604 +0000 UTC m=+0.528234965 container died 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f3f9bceb193f5326303c5a32758ac9a88a010333599655bfa75af03d35646d9-merged.mount: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74358]: 2025-11-29 07:26:38.387190689 +0000 UTC m=+0.579379020 container remove 517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b (image=quay.io/ceph/ceph:v18, name=eloquent_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:26:38 compute-0 systemd[1]: libpod-conmon-517339b15ada65c565185ee52083f837f934bdf4b9ae81bd35788c804247680b.scope: Deactivated successfully.
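Bootstrap first pulls quay.io/ceph/ceph:v18 (roughly 32 s, per the podman timestamps) and then runs a series of short-lived containers: the first prints the image's Ceph version ("ceph version 18.2.7 ... reef (stable)"), the next reports "167 167", evidently the uid/gid of the ceph user inside the image, and the following ones emit what look like freshly generated cephx keys used to assemble the initial keyrings. A rough equivalent of the version probe, assuming cephadm simply invokes the ceph binary inside the image (the exact entrypoint and arguments are not logged):

    # Roughly what the first throwaway container does: report the Ceph version baked into the image.
    podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v18 --version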
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.45567283 +0000 UTC m=+0.043626035 container create affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:38 compute-0 systemd[1]: Started libpod-conmon-affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e.scope.
Nov 29 07:26:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.515432325 +0000 UTC m=+0.103385610 container init affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.525507115 +0000 UTC m=+0.113460360 container start affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:38 compute-0 frosty_shtern[74407]: 167 167
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.435289272 +0000 UTC m=+0.023242507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:38 compute-0 systemd[1]: libpod-affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e.scope: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.531030824 +0000 UTC m=+0.118984059 container attach affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.531658271 +0000 UTC m=+0.119611516 container died affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:38 compute-0 podman[74391]: 2025-11-29 07:26:38.594702105 +0000 UTC m=+0.182655340 container remove affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e (image=quay.io/ceph/ceph:v18, name=frosty_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:38 compute-0 systemd[1]: libpod-conmon-affa49cc7b1180ec92f694020c66136c91ebac64311bd3a5b4435d817dce222e.scope: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.668387005 +0000 UTC m=+0.050360153 container create 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:26:38 compute-0 systemd[1]: Started libpod-conmon-87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6.scope.
Nov 29 07:26:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.648410988 +0000 UTC m=+0.030384176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.757174792 +0000 UTC m=+0.139148020 container init 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.768122186 +0000 UTC m=+0.150095324 container start 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.788184365 +0000 UTC m=+0.170157513 container attach 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:38 compute-0 nice_mendel[74440]: AQAuoCppHzo+LxAANtqAWNOUllm6u4az49TBQA==
Nov 29 07:26:38 compute-0 systemd[1]: libpod-87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6.scope: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.795993865 +0000 UTC m=+0.177966993 container died 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ad054c5c35c578b4f5fc2392d68d6b6b8c47877a58096d59e034daf3167c152-merged.mount: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74424]: 2025-11-29 07:26:38.840083029 +0000 UTC m=+0.222056157 container remove 87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6 (image=quay.io/ceph/ceph:v18, name=nice_mendel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:38 compute-0 systemd[1]: libpod-conmon-87aa16faa3359b6fb5749977e53bb8f9b564fcb52936073aae6648251b28dea6.scope: Deactivated successfully.
Nov 29 07:26:38 compute-0 podman[74459]: 2025-11-29 07:26:38.884447592 +0000 UTC m=+0.023548294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.125301925 +0000 UTC m=+0.264402627 container create 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:26:39 compute-0 systemd[1]: Started libpod-conmon-6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8.scope.
Nov 29 07:26:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.200885335 +0000 UTC m=+0.339986017 container init 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.206153107 +0000 UTC m=+0.345253789 container start 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.211623233 +0000 UTC m=+0.350723915 container attach 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:26:39 compute-0 serene_ramanujan[74475]: AQAvoCppevOODRAA7Zz1o1hK1xRPyEpZEatDww==
Nov 29 07:26:39 compute-0 systemd[1]: libpod-6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8.scope: Deactivated successfully.
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.231153628 +0000 UTC m=+0.370254320 container died 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:26:39 compute-0 podman[74459]: 2025-11-29 07:26:39.268595305 +0000 UTC m=+0.407695987 container remove 6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8 (image=quay.io/ceph/ceph:v18, name=serene_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 07:26:39 compute-0 systemd[1]: libpod-conmon-6e96f4a6a18c65a55dde40d3d507e0b512d2fdf5c802db4d892a43e0f34aa9f8.scope: Deactivated successfully.
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.314475048 +0000 UTC m=+0.025824346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.423381865 +0000 UTC m=+0.134731113 container create a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 07:26:39 compute-0 systemd[1]: Started libpod-conmon-a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a.scope.
Nov 29 07:26:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.486749887 +0000 UTC m=+0.198099155 container init a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.492781239 +0000 UTC m=+0.204130487 container start a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.496618852 +0000 UTC m=+0.207968120 container attach a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:39 compute-0 upbeat_jackson[74511]: AQAvoCppvby8HhAAw/8maLnRl6BwLS0++Ll3iA==
Nov 29 07:26:39 compute-0 systemd[1]: libpod-a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a.scope: Deactivated successfully.
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.519719744 +0000 UTC m=+0.231068992 container died a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:26:39 compute-0 podman[74495]: 2025-11-29 07:26:39.55346337 +0000 UTC m=+0.264812608 container remove a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a (image=quay.io/ceph/ceph:v18, name=upbeat_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:39 compute-0 systemd[1]: libpod-conmon-a3f8e8e32a4247b908576d20491bf9683cf88a5ad57ad5556fef2eed92d4db8a.scope: Deactivated successfully.
Nov 29 07:26:39 compute-0 podman[74531]: 2025-11-29 07:26:39.594317898 +0000 UTC m=+0.021896440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:40 compute-0 podman[74531]: 2025-11-29 07:26:40.031597998 +0000 UTC m=+0.459176520 container create c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:40 compute-0 systemd[1]: Started libpod-conmon-c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5.scope.
Nov 29 07:26:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f073953c6a88141f1ea6b34400ee72368bc77c9c59540a897d8e17c8d24a05/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:40 compute-0 podman[74531]: 2025-11-29 07:26:40.700285639 +0000 UTC m=+1.127864161 container init c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:26:40 compute-0 podman[74531]: 2025-11-29 07:26:40.706472045 +0000 UTC m=+1.134050587 container start c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:26:40 compute-0 podman[74531]: 2025-11-29 07:26:40.72862788 +0000 UTC m=+1.156206432 container attach c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:26:40 compute-0 brave_borg[74550]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 07:26:40 compute-0 brave_borg[74550]: setting min_mon_release = pacific
Nov 29 07:26:40 compute-0 brave_borg[74550]: /usr/bin/monmaptool: set fsid to 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:40 compute-0 brave_borg[74550]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 07:26:40 compute-0 systemd[1]: libpod-c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5.scope: Deactivated successfully.
Nov 29 07:26:40 compute-0 podman[74558]: 2025-11-29 07:26:40.785617452 +0000 UTC m=+0.029777182 container died c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-76f073953c6a88141f1ea6b34400ee72368bc77c9c59540a897d8e17c8d24a05-merged.mount: Deactivated successfully.
Nov 29 07:26:40 compute-0 podman[74558]: 2025-11-29 07:26:40.822568725 +0000 UTC m=+0.066728465 container remove c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5 (image=quay.io/ceph/ceph:v18, name=brave_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:40 compute-0 systemd[1]: libpod-conmon-c04fc26aa9b3275e49b6954b48ed145c3a31186cb3f0b3c050d5022e36819cb5.scope: Deactivated successfully.
Nov 29 07:26:40 compute-0 sshd-session[74546]: Invalid user vps from 114.34.106.146 port 47626
Nov 29 07:26:40 compute-0 podman[74574]: 2025-11-29 07:26:40.890648785 +0000 UTC m=+0.031043956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:41 compute-0 sshd-session[74546]: Received disconnect from 114.34.106.146 port 47626:11: Bye Bye [preauth]
Nov 29 07:26:41 compute-0 sshd-session[74546]: Disconnected from invalid user vps 114.34.106.146 port 47626 [preauth]
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.525869834 +0000 UTC m=+0.666264945 container create 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:26:41 compute-0 systemd[1]: Started libpod-conmon-2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a.scope.
Nov 29 07:26:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b46db71808b6b12a8958ab86aa59deb3681abb3c9190bfd2e0ae79ea98f4cf7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b46db71808b6b12a8958ab86aa59deb3681abb3c9190bfd2e0ae79ea98f4cf7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b46db71808b6b12a8958ab86aa59deb3681abb3c9190bfd2e0ae79ea98f4cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b46db71808b6b12a8958ab86aa59deb3681abb3c9190bfd2e0ae79ea98f4cf7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.808660264 +0000 UTC m=+0.949055345 container init 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.816134245 +0000 UTC m=+0.956529356 container start 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.822227908 +0000 UTC m=+0.962623019 container attach 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:41 compute-0 systemd[1]: libpod-2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a.scope: Deactivated successfully.
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.911457506 +0000 UTC m=+1.051852577 container died 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b46db71808b6b12a8958ab86aa59deb3681abb3c9190bfd2e0ae79ea98f4cf7-merged.mount: Deactivated successfully.
Nov 29 07:26:41 compute-0 podman[74574]: 2025-11-29 07:26:41.95591806 +0000 UTC m=+1.096313171 container remove 2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a (image=quay.io/ceph/ceph:v18, name=adoring_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:41 compute-0 systemd[1]: libpod-conmon-2d32c593660118a1141e92d21cd48f1e8b70452dbf8256eef9e758c94afdc58a.scope: Deactivated successfully.
Nov 29 07:26:42 compute-0 systemd[1]: Reloading.
Nov 29 07:26:42 compute-0 systemd-sysv-generator[74662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:42 compute-0 systemd-rc-local-generator[74659]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:42 compute-0 systemd[1]: Reloading.
Nov 29 07:26:42 compute-0 systemd-rc-local-generator[74695]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:42 compute-0 systemd-sysv-generator[74698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:42 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 07:26:42 compute-0 systemd[1]: Reloading.
Nov 29 07:26:42 compute-0 systemd-rc-local-generator[74730]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:42 compute-0 systemd-sysv-generator[74736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:42 compute-0 systemd[1]: Reached target Ceph cluster 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:42 compute-0 systemd[1]: Reloading.
Nov 29 07:26:42 compute-0 systemd-rc-local-generator[74767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:42 compute-0 systemd-sysv-generator[74773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:43 compute-0 systemd[1]: Reloading.
Nov 29 07:26:43 compute-0 systemd-sysv-generator[74813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:43 compute-0 systemd-rc-local-generator[74810]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:43 compute-0 systemd[1]: Created slice Slice /system/ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:43 compute-0 systemd[1]: Reached target System Time Set.
Nov 29 07:26:43 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 29 07:26:43 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:43 compute-0 podman[74868]: 2025-11-29 07:26:43.679330831 +0000 UTC m=+0.049400733 container create 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab5e24f8f1d52858d5369248d148dd83949139241151dc85f2d7e8044caa75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab5e24f8f1d52858d5369248d148dd83949139241151dc85f2d7e8044caa75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab5e24f8f1d52858d5369248d148dd83949139241151dc85f2d7e8044caa75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab5e24f8f1d52858d5369248d148dd83949139241151dc85f2d7e8044caa75d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 podman[74868]: 2025-11-29 07:26:43.747232736 +0000 UTC m=+0.117302468 container init 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:26:43 compute-0 podman[74868]: 2025-11-29 07:26:43.660375459 +0000 UTC m=+0.030445191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:43 compute-0 podman[74868]: 2025-11-29 07:26:43.754461635 +0000 UTC m=+0.124531337 container start 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:43 compute-0 bash[74868]: 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8
Nov 29 07:26:43 compute-0 systemd[1]: Started Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:43 compute-0 ceph-mon[74887]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: pidfile_write: ignore empty --pid-file
Nov 29 07:26:43 compute-0 ceph-mon[74887]: load: jerasure load: lrc 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Git sha 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: DB SUMMARY
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: DB Session ID:  QCI8IVNFCIW7DUNJQMQ0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                                     Options.env: 0x564744f40c40
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                                Options.info_log: 0x5647461a0e80
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                                 Options.wal_dir: 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                    Options.write_buffer_manager: 0x5647461b0b40
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                               Options.row_cache: None
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                              Options.wal_filter: None
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.wal_compression: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.max_background_jobs: 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Compression algorithms supported:
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kZSTD supported: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:           Options.merge_operator: 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:        Options.compaction_filter: None
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5647461a0a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5647461991f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.compression: NoCompression
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.num_levels: 7
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3e5adb8f-86c2-4b9b-abb8-241c7a80419a
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401203810631, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401203812879, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "QCI8IVNFCIW7DUNJQMQ0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401203813032, "job": 1, "event": "recovery_finished"}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5647461c2e00
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: DB pointer 0x56474624c000
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:26:43 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5647461991f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:26:43 compute-0 ceph-mon[74887]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@-1(???) e0 preinit fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 07:26:43 compute-0 ceph-mon[74887]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 07:26:43 compute-0 ceph-mon[74887]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 07:26:43 compute-0 podman[74888]: 2025-11-29 07:26:43.852232948 +0000 UTC m=+0.050036387 container create 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:26:41.858173Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).mds e1 new map
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mkfs 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:26:43 compute-0 ceph-mon[74887]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 07:26:43 compute-0 ceph-mon[74887]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:43 compute-0 systemd[1]: Started libpod-conmon-43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027.scope.
Nov 29 07:26:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede5303f9733c24ec4a4432c9ff1b4f7bb5e7ae83f60e754df8434e27ffb112/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede5303f9733c24ec4a4432c9ff1b4f7bb5e7ae83f60e754df8434e27ffb112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede5303f9733c24ec4a4432c9ff1b4f7bb5e7ae83f60e754df8434e27ffb112/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:43 compute-0 podman[74888]: 2025-11-29 07:26:43.925915241 +0000 UTC m=+0.123718700 container init 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:43 compute-0 podman[74888]: 2025-11-29 07:26:43.832654493 +0000 UTC m=+0.030457952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:43 compute-0 podman[74888]: 2025-11-29 07:26:43.93357354 +0000 UTC m=+0.131376979 container start 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:26:43 compute-0 podman[74888]: 2025-11-29 07:26:43.936622293 +0000 UTC m=+0.134425742 container attach 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:44 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 07:26:44 compute-0 ceph-mon[74887]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855577127' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:   cluster:
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     id:     321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     health: HEALTH_OK
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:  
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:   services:
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     mon: 1 daemons, quorum compute-0 (age 0.465143s)
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     mgr: no daemons active
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     osd: 0 osds: 0 up, 0 in
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:  
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:   data:
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     pools:   0 pools, 0 pgs
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     objects: 0 objects, 0 B
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     usage:   0 B used, 0 B / 0 B avail
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:     pgs:     
Nov 29 07:26:44 compute-0 pensive_beaver[74942]:  
Nov 29 07:26:44 compute-0 systemd[1]: libpod-43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027.scope: Deactivated successfully.
Nov 29 07:26:44 compute-0 podman[74888]: 2025-11-29 07:26:44.337888833 +0000 UTC m=+0.535692292 container died 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 07:26:44 compute-0 podman[74888]: 2025-11-29 07:26:44.386738243 +0000 UTC m=+0.584541692 container remove 43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027 (image=quay.io/ceph/ceph:v18, name=pensive_beaver, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:44 compute-0 systemd[1]: libpod-conmon-43241ddf01da9c47abc51bb92662d76d5e50de8133b84136c27590688007e027.scope: Deactivated successfully.
Nov 29 07:26:44 compute-0 podman[74979]: 2025-11-29 07:26:44.454367722 +0000 UTC m=+0.040292795 container create 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:26:44 compute-0 systemd[1]: Started libpod-conmon-722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3.scope.
Nov 29 07:26:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6ccf3b4444d952364b09125a57f04dbf2d5b23aa14bdcb01ccd676ffa6b9bc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6ccf3b4444d952364b09125a57f04dbf2d5b23aa14bdcb01ccd676ffa6b9bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6ccf3b4444d952364b09125a57f04dbf2d5b23aa14bdcb01ccd676ffa6b9bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:44 compute-0 podman[74979]: 2025-11-29 07:26:44.4388317 +0000 UTC m=+0.024756793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6ccf3b4444d952364b09125a57f04dbf2d5b23aa14bdcb01ccd676ffa6b9bc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:44 compute-0 podman[74979]: 2025-11-29 07:26:44.551164834 +0000 UTC m=+0.137090007 container init 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:26:44 compute-0 podman[74979]: 2025-11-29 07:26:44.563329686 +0000 UTC m=+0.149254799 container start 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:26:44 compute-0 podman[74979]: 2025-11-29 07:26:44.574450786 +0000 UTC m=+0.160375859 container attach 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:26:44 compute-0 ceph-mon[74887]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:44 compute-0 ceph-mon[74887]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:26:44 compute-0 ceph-mon[74887]: fsmap 
Nov 29 07:26:44 compute-0 ceph-mon[74887]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:26:44 compute-0 ceph-mon[74887]: mgrmap e1: no daemons active
Nov 29 07:26:44 compute-0 ceph-mon[74887]: from='client.? 192.168.122.100:0/855577127' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 07:26:45 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:26:45 compute-0 ceph-mon[74887]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3140365402' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:26:45 compute-0 ceph-mon[74887]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3140365402' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:26:45 compute-0 pedantic_brattain[74996]: 
Nov 29 07:26:45 compute-0 pedantic_brattain[74996]: [global]
Nov 29 07:26:45 compute-0 pedantic_brattain[74996]:         fsid = 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:45 compute-0 pedantic_brattain[74996]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 07:26:45 compute-0 pedantic_brattain[74996]:         osd_crush_chooseleaf_type = 0
Nov 29 07:26:45 compute-0 systemd[1]: libpod-722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3.scope: Deactivated successfully.
Nov 29 07:26:45 compute-0 podman[75022]: 2025-11-29 07:26:45.092790276 +0000 UTC m=+0.040399176 container died 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6ccf3b4444d952364b09125a57f04dbf2d5b23aa14bdcb01ccd676ffa6b9bc-merged.mount: Deactivated successfully.
Nov 29 07:26:45 compute-0 podman[75022]: 2025-11-29 07:26:45.160384934 +0000 UTC m=+0.107993814 container remove 722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3 (image=quay.io/ceph/ceph:v18, name=pedantic_brattain, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:26:45 compute-0 systemd[1]: libpod-conmon-722495f42d59cfc7cce5c0c340f07cdf1be722d5123888874a4cb921b75df1b3.scope: Deactivated successfully.
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.242934302 +0000 UTC m=+0.051634910 container create e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:26:45 compute-0 systemd[1]: Started libpod-conmon-e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9.scope.
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.219582058 +0000 UTC m=+0.028282696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61dba70ecdafdc6088e4b6d6df88900ca278ff220910d5415d52ac538ebd37e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61dba70ecdafdc6088e4b6d6df88900ca278ff220910d5415d52ac538ebd37e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61dba70ecdafdc6088e4b6d6df88900ca278ff220910d5415d52ac538ebd37e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61dba70ecdafdc6088e4b6d6df88900ca278ff220910d5415d52ac538ebd37e5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.336566248 +0000 UTC m=+0.145266936 container init e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.349926895 +0000 UTC m=+0.158627493 container start e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.354251144 +0000 UTC m=+0.162951752 container attach e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:45 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:26:45 compute-0 ceph-mon[74887]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784495342' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:26:45 compute-0 systemd[1]: libpod-e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9.scope: Deactivated successfully.
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.804558068 +0000 UTC m=+0.613258766 container died e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-61dba70ecdafdc6088e4b6d6df88900ca278ff220910d5415d52ac538ebd37e5-merged.mount: Deactivated successfully.
Nov 29 07:26:45 compute-0 podman[75038]: 2025-11-29 07:26:45.855015361 +0000 UTC m=+0.663715959 container remove e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9 (image=quay.io/ceph/ceph:v18, name=kind_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:26:45 compute-0 systemd[1]: libpod-conmon-e341e69e1f21eccc8d81983d49802db781ae1057df340407ed0b3a50098716a9.scope: Deactivated successfully.
Nov 29 07:26:45 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:26:45 compute-0 ceph-mon[74887]: from='client.? 192.168.122.100:0/3140365402' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:26:45 compute-0 ceph-mon[74887]: from='client.? 192.168.122.100:0/3140365402' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:26:45 compute-0 ceph-mon[74887]: from='client.? 192.168.122.100:0/2784495342' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:26:46 compute-0 ceph-mon[74887]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 07:26:46 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 07:26:46 compute-0 ceph-mon[74887]: mon.compute-0@0(leader) e1 shutdown
Nov 29 07:26:46 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0[74883]: 2025-11-29T07:26:46.048+0000 7f4edefa7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 07:26:46 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0[74883]: 2025-11-29T07:26:46.048+0000 7f4edefa7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 07:26:46 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:26:46 compute-0 ceph-mon[74887]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:26:46 compute-0 podman[75122]: 2025-11-29 07:26:46.316045147 +0000 UTC m=+0.303323205 container died 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab5e24f8f1d52858d5369248d148dd83949139241151dc85f2d7e8044caa75d-merged.mount: Deactivated successfully.
Nov 29 07:26:46 compute-0 podman[75122]: 2025-11-29 07:26:46.352012901 +0000 UTC m=+0.339290939 container remove 0f5d1e28e964990f80d63af2c86c4709db5f1d1a29847935b1a2e91285f114d8 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:46 compute-0 bash[75122]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0
Nov 29 07:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:26:46 compute-0 systemd[1]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mon.compute-0.service: Deactivated successfully.
Nov 29 07:26:46 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:46 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:26:46 compute-0 podman[75218]: 2025-11-29 07:26:46.721248588 +0000 UTC m=+0.048468144 container create 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baef3664de73aab6ce933ef4ad04cb6db43c2fbf2f35bd7b150a8fd8fc8f9dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baef3664de73aab6ce933ef4ad04cb6db43c2fbf2f35bd7b150a8fd8fc8f9dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baef3664de73aab6ce933ef4ad04cb6db43c2fbf2f35bd7b150a8fd8fc8f9dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baef3664de73aab6ce933ef4ad04cb6db43c2fbf2f35bd7b150a8fd8fc8f9dc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 podman[75218]: 2025-11-29 07:26:46.781286 +0000 UTC m=+0.108505576 container init 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:46 compute-0 podman[75218]: 2025-11-29 07:26:46.787788463 +0000 UTC m=+0.115008019 container start 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:46 compute-0 bash[75218]: 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2
Nov 29 07:26:46 compute-0 podman[75218]: 2025-11-29 07:26:46.702914299 +0000 UTC m=+0.030133905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:46 compute-0 systemd[1]: Started Ceph mon.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:46 compute-0 ceph-mon[75237]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: pidfile_write: ignore empty --pid-file
Nov 29 07:26:46 compute-0 ceph-mon[75237]: load: jerasure load: lrc 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Git sha 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: DB SUMMARY
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: DB Session ID:  UZSC7H07F1USVRLC4WEP
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                                     Options.env: 0x55dbdd186c40
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                                Options.info_log: 0x55dbdf335040
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                                 Options.wal_dir: 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                    Options.write_buffer_manager: 0x55dbdf344b40
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                               Options.row_cache: None
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                              Options.wal_filter: None
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.wal_compression: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.max_background_jobs: 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Compression algorithms supported:
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kZSTD supported: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:           Options.merge_operator: 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:        Options.compaction_filter: None
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55dbdf334c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55dbdf32d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.compression: NoCompression
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.num_levels: 7
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3e5adb8f-86c2-4b9b-abb8-241c7a80419a
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401206829303, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401206831835, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401206, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401206831938, "job": 1, "event": "recovery_finished"}
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55dbdf356e00
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: DB pointer 0x55dbdf3e0000
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:26:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.49 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.49 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:26:46 compute-0 ceph-mon[75237]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???) e1 preinit fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).mds e1 new map
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 07:26:46 compute-0 ceph-mon[75237]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:26:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:26:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:26:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 07:26:46 compute-0 podman[75238]: 2025-11-29 07:26:46.896156495 +0000 UTC m=+0.067761933 container create 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:26:46 compute-0 ceph-mon[75237]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:26:46 compute-0 ceph-mon[75237]: fsmap 
Nov 29 07:26:46 compute-0 ceph-mon[75237]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:26:46 compute-0 ceph-mon[75237]: mgrmap e1: no daemons active
Nov 29 07:26:46 compute-0 systemd[1]: Started libpod-conmon-978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec.scope.
Nov 29 07:26:46 compute-0 podman[75238]: 2025-11-29 07:26:46.87365783 +0000 UTC m=+0.045263288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21cf8f351ad0f49b8ff88ba86ca9c09010f6a3580331853dd8dabd6af027a027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21cf8f351ad0f49b8ff88ba86ca9c09010f6a3580331853dd8dabd6af027a027/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21cf8f351ad0f49b8ff88ba86ca9c09010f6a3580331853dd8dabd6af027a027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:46 compute-0 podman[75238]: 2025-11-29 07:26:46.991679411 +0000 UTC m=+0.163284859 container init 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 29 07:26:47 compute-0 podman[75238]: 2025-11-29 07:26:47.004074968 +0000 UTC m=+0.175680406 container start 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:47 compute-0 podman[75238]: 2025-11-29 07:26:47.008015059 +0000 UTC m=+0.179620517 container attach 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 07:26:47 compute-0 systemd[1]: libpod-978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec.scope: Deactivated successfully.
Nov 29 07:26:47 compute-0 podman[75238]: 2025-11-29 07:26:47.437159575 +0000 UTC m=+0.608765033 container died 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-21cf8f351ad0f49b8ff88ba86ca9c09010f6a3580331853dd8dabd6af027a027-merged.mount: Deactivated successfully.
Nov 29 07:26:47 compute-0 podman[75238]: 2025-11-29 07:26:47.486215549 +0000 UTC m=+0.657820987 container remove 978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec (image=quay.io/ceph/ceph:v18, name=epic_borg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:26:47 compute-0 systemd[1]: libpod-conmon-978913e2df10e1b08818a41016adeaf4f186200657bf43952e389b1b8a1821ec.scope: Deactivated successfully.
Nov 29 07:26:47 compute-0 podman[75329]: 2025-11-29 07:26:47.557553125 +0000 UTC m=+0.051238890 container create 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:26:47 compute-0 systemd[1]: Started libpod-conmon-84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0.scope.
Nov 29 07:26:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8678905862fcec4ca405be2568209a3665fc9ec44ae837a6e11a8e9a67f94a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8678905862fcec4ca405be2568209a3665fc9ec44ae837a6e11a8e9a67f94a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8678905862fcec4ca405be2568209a3665fc9ec44ae837a6e11a8e9a67f94a11/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:47 compute-0 podman[75329]: 2025-11-29 07:26:47.531820413 +0000 UTC m=+0.025506208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:47 compute-0 podman[75329]: 2025-11-29 07:26:47.637264654 +0000 UTC m=+0.130950459 container init 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:47 compute-0 podman[75329]: 2025-11-29 07:26:47.644385261 +0000 UTC m=+0.138071036 container start 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:26:47 compute-0 podman[75329]: 2025-11-29 07:26:47.649068968 +0000 UTC m=+0.142754733 container attach 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 07:26:48 compute-0 systemd[1]: libpod-84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0.scope: Deactivated successfully.
Nov 29 07:26:48 compute-0 conmon[75345]: conmon 84397493bea00bfd496a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0.scope/container/memory.events
Nov 29 07:26:48 compute-0 podman[75329]: 2025-11-29 07:26:48.059164701 +0000 UTC m=+0.552850516 container died 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8678905862fcec4ca405be2568209a3665fc9ec44ae837a6e11a8e9a67f94a11-merged.mount: Deactivated successfully.
Nov 29 07:26:48 compute-0 podman[75329]: 2025-11-29 07:26:48.119192401 +0000 UTC m=+0.612878166 container remove 84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0 (image=quay.io/ceph/ceph:v18, name=sad_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:26:48 compute-0 systemd[1]: libpod-conmon-84397493bea00bfd496ae7e237e3f27b1036d20d2b66ab6d24357b70566727d0.scope: Deactivated successfully.
Nov 29 07:26:48 compute-0 systemd[1]: Reloading.
Nov 29 07:26:48 compute-0 systemd-rc-local-generator[75412]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:48 compute-0 systemd-sysv-generator[75416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:48 compute-0 systemd[1]: Reloading.
Nov 29 07:26:48 compute-0 systemd-rc-local-generator[75450]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:48 compute-0 systemd-sysv-generator[75454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:48 compute-0 systemd[1]: Starting Ceph mgr.compute-0.fwfehy for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:26:49 compute-0 podman[75507]: 2025-11-29 07:26:49.017594843 +0000 UTC m=+0.051243521 container create 6283142e4ea10de64e64feea88f71d7b72c9d43c6246daf901c9ddf14270378b (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:49 compute-0 podman[75507]: 2025-11-29 07:26:48.994584398 +0000 UTC m=+0.028233096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16505de1d2d5254431cab513dd2b00fb202b43f171a9582058d7847e6275d49a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16505de1d2d5254431cab513dd2b00fb202b43f171a9582058d7847e6275d49a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16505de1d2d5254431cab513dd2b00fb202b43f171a9582058d7847e6275d49a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16505de1d2d5254431cab513dd2b00fb202b43f171a9582058d7847e6275d49a/merged/var/lib/ceph/mgr/ceph-compute-0.fwfehy supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 podman[75507]: 2025-11-29 07:26:49.110754161 +0000 UTC m=+0.144402899 container init 6283142e4ea10de64e64feea88f71d7b72c9d43c6246daf901c9ddf14270378b (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:26:49 compute-0 podman[75507]: 2025-11-29 07:26:49.12284003 +0000 UTC m=+0.156488758 container start 6283142e4ea10de64e64feea88f71d7b72c9d43c6246daf901c9ddf14270378b (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:49 compute-0 bash[75507]: 6283142e4ea10de64e64feea88f71d7b72c9d43c6246daf901c9ddf14270378b
Nov 29 07:26:49 compute-0 systemd[1]: Started Ceph mgr.compute-0.fwfehy for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: pidfile_write: ignore empty --pid-file
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.225228688 +0000 UTC m=+0.053576899 container create 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:26:49 compute-0 systemd[1]: Started libpod-conmon-1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f.scope.
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'alerts'
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.203411237 +0000 UTC m=+0.031759438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4589ddfac1e46adf21f53e6841a3ec2ae1fbecd1528d7f64c063ddfd103c5896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4589ddfac1e46adf21f53e6841a3ec2ae1fbecd1528d7f64c063ddfd103c5896/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4589ddfac1e46adf21f53e6841a3ec2ae1fbecd1528d7f64c063ddfd103c5896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.324670254 +0000 UTC m=+0.153018455 container init 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.332814483 +0000 UTC m=+0.161162664 container start 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.336339116 +0000 UTC m=+0.164687327 container attach 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'balancer'
Nov 29 07:26:49 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:49.606+0000 7f7da71fd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:26:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:26:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084196844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]: 
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]: {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "health": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "status": "HEALTH_OK",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "checks": {},
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "mutes": []
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "election_epoch": 5,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "quorum": [
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         0
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     ],
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "quorum_names": [
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "compute-0"
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     ],
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "quorum_age": 2,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "monmap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "epoch": 1,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "min_mon_release_name": "reef",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_mons": 1
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "osdmap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "epoch": 1,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_osds": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_up_osds": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "osd_up_since": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_in_osds": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "osd_in_since": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_remapped_pgs": 0
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "pgmap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "pgs_by_state": [],
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_pgs": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_pools": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_objects": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "data_bytes": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "bytes_used": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "bytes_avail": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "bytes_total": 0
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "fsmap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "epoch": 1,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "by_rank": [],
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "up:standby": 0
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "mgrmap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "available": false,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "num_standbys": 0,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "modules": [
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:             "iostat",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:             "nfs",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:             "restful"
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         ],
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "services": {}
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "servicemap": {
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "epoch": 1,
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:         "services": {}
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     },
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]:     "progress_events": {}
Nov 29 07:26:49 compute-0 cranky_meninsky[75569]: }
Nov 29 07:26:49 compute-0 systemd[1]: libpod-1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f.scope: Deactivated successfully.
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.750030713 +0000 UTC m=+0.578378894 container died 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4589ddfac1e46adf21f53e6841a3ec2ae1fbecd1528d7f64c063ddfd103c5896-merged.mount: Deactivated successfully.
Nov 29 07:26:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4084196844' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:49 compute-0 podman[75528]: 2025-11-29 07:26:49.796076205 +0000 UTC m=+0.624424386 container remove 1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f (image=quay.io/ceph/ceph:v18, name=cranky_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:49 compute-0 systemd[1]: libpod-conmon-1f2d32ef8c70801b19e56411233b6bb018a9145b2b39fe6946596b089526163f.scope: Deactivated successfully.
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:26:49 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:49.911+0000 7f7da71fd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:26:49 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'cephadm'
Nov 29 07:26:51 compute-0 podman[75618]: 2025-11-29 07:26:51.853706404 +0000 UTC m=+0.030491502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:51 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'crash'
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.030179974 +0000 UTC m=+0.206965042 container create 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:26:52 compute-0 systemd[1]: Started libpod-conmon-4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3.scope.
Nov 29 07:26:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4f6a384ba0f14a83a0f66c5b847916524cd23cd04a09f452c926fada990b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4f6a384ba0f14a83a0f66c5b847916524cd23cd04a09f452c926fada990b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4f6a384ba0f14a83a0f66c5b847916524cd23cd04a09f452c926fada990b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.102267174 +0000 UTC m=+0.279052262 container init 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.10690176 +0000 UTC m=+0.283686828 container start 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.110154528 +0000 UTC m=+0.286939596 container attach 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:52 compute-0 ceph-mgr[75527]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:26:52 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:52.239+0000 7f7da71fd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:26:52 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'dashboard'
Nov 29 07:26:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:26:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701378387' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:52 compute-0 zealous_shtern[75634]: 
Nov 29 07:26:52 compute-0 zealous_shtern[75634]: {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "health": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "status": "HEALTH_OK",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "checks": {},
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "mutes": []
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "election_epoch": 5,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "quorum": [
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         0
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     ],
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "quorum_names": [
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "compute-0"
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     ],
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "quorum_age": 5,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "monmap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "epoch": 1,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "min_mon_release_name": "reef",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_mons": 1
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "osdmap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "epoch": 1,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_osds": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_up_osds": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "osd_up_since": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_in_osds": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "osd_in_since": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_remapped_pgs": 0
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "pgmap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "pgs_by_state": [],
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_pgs": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_pools": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_objects": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "data_bytes": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "bytes_used": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "bytes_avail": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "bytes_total": 0
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "fsmap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "epoch": 1,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "by_rank": [],
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "up:standby": 0
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "mgrmap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "available": false,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "num_standbys": 0,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "modules": [
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:             "iostat",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:             "nfs",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:             "restful"
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         ],
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "services": {}
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "servicemap": {
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "epoch": 1,
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:         "services": {}
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     },
Nov 29 07:26:52 compute-0 zealous_shtern[75634]:     "progress_events": {}
Nov 29 07:26:52 compute-0 zealous_shtern[75634]: }
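The block above is the stdout of ceph status --format json-pretty, captured line-by-line by journald under the container name. After stripping the per-line journal prefixes (e.g. with journalctl -o cat), it parses as ordinary JSON; the sketch below works on a trimmed sample of the payload logged above, which shows a healthy single-mon cluster with no OSDs or pools deployed yet.

    import json

    # Trimmed sample of the status JSON above; the full payload also
    # carries pgmap, fsmap, mgrmap, servicemap and progress_events.
    sample = """
    {
        "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
        "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
        "quorum_names": ["compute-0"],
        "osdmap": {"num_osds": 0, "num_up_osds": 0, "num_in_osds": 0}
    }
    """
    status = json.loads(sample)
    osd = status["osdmap"]
    print(status["health"]["status"], status["quorum_names"])
    print(f'{osd["num_up_osds"]}/{osd["num_osds"]} OSDs up')  # 0/0: none deployed yet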
Nov 29 07:26:52 compute-0 systemd[1]: libpod-4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3.scope: Deactivated successfully.
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.516994662 +0000 UTC m=+0.693779730 container died 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a4f6a384ba0f14a83a0f66c5b847916524cd23cd04a09f452c926fada990b9-merged.mount: Deactivated successfully.
Nov 29 07:26:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/701378387' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:52 compute-0 podman[75618]: 2025-11-29 07:26:52.554471757 +0000 UTC m=+0.731256825 container remove 4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3 (image=quay.io/ceph/ceph:v18, name=zealous_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:52 compute-0 systemd[1]: libpod-conmon-4b98479e42b2a87972c4c77415453c060019aee41194192829803bb52ea2c5c3.scope: Deactivated successfully.
Nov 29 07:26:53 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:26:53 compute-0 ceph-mgr[75527]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:26:53 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:53.936+0000 7f7da71fd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:26:53 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:26:54 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:26:54 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:26:54 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]:   from numpy import show_config as show_numpy_config
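The three lines above are a single multi-line UserWarning from scipy, split across journal entries: ceph-mgr hosts each Python module in its own sub-interpreter, and loading diskprediction_local pulls in scipy/numpy, which do not officially support sub-interpreters. The notice itself is ordinary Python warnings-module output; the sketch below shows generically how such a warning is raised and, if a caller deems it benign, filtered. The filter is a standard-library mechanism, not a ceph-mgr configuration knob.

    import warnings

    # How a library typically emits this kind of notice at import time
    # (message abbreviated from the journal lines above).
    warnings.warn(
        "NumPy was imported from a Python sub-interpreter ...",
        UserWarning,
    )

    # Generic suppression by message prefix; illustrative only.
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
        category=UserWarning,
    )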
Nov 29 07:26:54 compute-0 ceph-mgr[75527]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:26:54 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'influx'
Nov 29 07:26:54 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:54.483+0000 7f7da71fd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:26:54 compute-0 podman[75672]: 2025-11-29 07:26:54.603473258 +0000 UTC m=+0.025547280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:54 compute-0 ceph-mgr[75527]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:26:54 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:54.725+0000 7f7da71fd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:26:54 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'insights'
Nov 29 07:26:54 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'iostat'
Nov 29 07:26:55 compute-0 ceph-mgr[75527]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:26:55 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:55.181+0000 7f7da71fd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:26:55 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:26:55 compute-0 podman[75672]: 2025-11-29 07:26:55.835645533 +0000 UTC m=+1.257719515 container create 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:26:55 compute-0 systemd[1]: Started libpod-conmon-6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8.scope.
Nov 29 07:26:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49146c3cfbf628d95dc0afa8847eeacc8651aff70cb6ed29efad981e1787476e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49146c3cfbf628d95dc0afa8847eeacc8651aff70cb6ed29efad981e1787476e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49146c3cfbf628d95dc0afa8847eeacc8651aff70cb6ed29efad981e1787476e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:56 compute-0 podman[75672]: 2025-11-29 07:26:56.050221971 +0000 UTC m=+1.472295953 container init 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:26:56 compute-0 podman[75672]: 2025-11-29 07:26:56.055883548 +0000 UTC m=+1.477957530 container start 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:26:56 compute-0 podman[75672]: 2025-11-29 07:26:56.063760492 +0000 UTC m=+1.485834504 container attach 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:26:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648854382' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]: 
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]: {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "health": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "status": "HEALTH_OK",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "checks": {},
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "mutes": []
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "election_epoch": 5,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "quorum": [
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         0
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     ],
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "quorum_names": [
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "compute-0"
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     ],
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "quorum_age": 9,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "monmap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "epoch": 1,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "min_mon_release_name": "reef",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_mons": 1
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "osdmap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "epoch": 1,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_osds": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_up_osds": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "osd_up_since": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_in_osds": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "osd_in_since": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_remapped_pgs": 0
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "pgmap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "pgs_by_state": [],
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_pgs": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_pools": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_objects": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "data_bytes": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "bytes_used": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "bytes_avail": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "bytes_total": 0
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "fsmap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "epoch": 1,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "by_rank": [],
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "up:standby": 0
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "mgrmap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "available": false,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "num_standbys": 0,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "modules": [
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:             "iostat",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:             "nfs",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:             "restful"
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         ],
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "services": {}
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "servicemap": {
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "epoch": 1,
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:         "services": {}
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     },
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]:     "progress_events": {}
Nov 29 07:26:56 compute-0 suspicious_rosalind[75688]: }
Nov 29 07:26:56 compute-0 systemd[1]: libpod-6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8.scope: Deactivated successfully.
Nov 29 07:26:56 compute-0 podman[75672]: 2025-11-29 07:26:56.498601375 +0000 UTC m=+1.920675357 container died 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-49146c3cfbf628d95dc0afa8847eeacc8651aff70cb6ed29efad981e1787476e-merged.mount: Deactivated successfully.
Nov 29 07:26:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1648854382' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:26:56 compute-0 podman[75672]: 2025-11-29 07:26:56.548526878 +0000 UTC m=+1.970600860 container remove 6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8 (image=quay.io/ceph/ceph:v18, name=suspicious_rosalind, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:26:56 compute-0 systemd[1]: libpod-conmon-6837a18eb932b65f670d69b0f521c9032aa7dd9695817bd62f6c1eb980e6dcd8.scope: Deactivated successfully.
Nov 29 07:26:57 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'localpool'
Nov 29 07:26:57 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 07:26:58 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'mirroring'
Nov 29 07:26:58 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'nfs'
Nov 29 07:26:58 compute-0 podman[75725]: 2025-11-29 07:26:58.600680332 +0000 UTC m=+0.029612623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'orchestrator'
Nov 29 07:26:59 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:59.027+0000 7f7da71fd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:59.703+0000 7f7da71fd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 07:26:59 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:26:59.977+0000 7f7da71fd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:26:59 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'osd_support'
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 07:27:00 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:00.232+0000 7f7da71fd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'progress'
Nov 29 07:27:00 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:00.501+0000 7f7da71fd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 podman[75725]: 2025-11-29 07:27:00.555372363 +0000 UTC m=+1.984304564 container create ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:27:00 compute-0 systemd[1]: Started libpod-conmon-ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88.scope.
Nov 29 07:27:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:00 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:00.767+0000 7f7da71fd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:27:00 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'prometheus'
Nov 29 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657135ade04c1991b7ec3491d943a8f51bb13b278100d3471e2b72c63496d06b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657135ade04c1991b7ec3491d943a8f51bb13b278100d3471e2b72c63496d06b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657135ade04c1991b7ec3491d943a8f51bb13b278100d3471e2b72c63496d06b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:01 compute-0 podman[75725]: 2025-11-29 07:27:01.291780803 +0000 UTC m=+2.720713044 container init ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:27:01 compute-0 podman[75725]: 2025-11-29 07:27:01.302151498 +0000 UTC m=+2.731083729 container start ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:01 compute-0 podman[75725]: 2025-11-29 07:27:01.659853456 +0000 UTC m=+3.088785757 container attach ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:27:01 compute-0 ceph-mgr[75527]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:27:01 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rbd_support'
Nov 29 07:27:01 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:01.753+0000 7f7da71fd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:27:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:27:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87470932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:01 compute-0 musing_joliot[75742]: 
Nov 29 07:27:01 compute-0 musing_joliot[75742]: {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "health": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "status": "HEALTH_OK",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "checks": {},
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "mutes": []
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "election_epoch": 5,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "quorum": [
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         0
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     ],
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "quorum_names": [
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "compute-0"
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     ],
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "quorum_age": 14,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "monmap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "epoch": 1,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "min_mon_release_name": "reef",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_mons": 1
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "osdmap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "epoch": 1,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_osds": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_up_osds": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "osd_up_since": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_in_osds": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "osd_in_since": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_remapped_pgs": 0
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "pgmap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "pgs_by_state": [],
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_pgs": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_pools": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_objects": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "data_bytes": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "bytes_used": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "bytes_avail": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "bytes_total": 0
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "fsmap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "epoch": 1,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "by_rank": [],
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "up:standby": 0
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "mgrmap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "available": false,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "num_standbys": 0,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "modules": [
Nov 29 07:27:01 compute-0 musing_joliot[75742]:             "iostat",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:             "nfs",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:             "restful"
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         ],
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "services": {}
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "servicemap": {
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "epoch": 1,
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:27:01 compute-0 musing_joliot[75742]:         "services": {}
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     },
Nov 29 07:27:01 compute-0 musing_joliot[75742]:     "progress_events": {}
Nov 29 07:27:01 compute-0 musing_joliot[75742]: }
Nov 29 07:27:01 compute-0 systemd[1]: libpod-ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88.scope: Deactivated successfully.
Nov 29 07:27:01 compute-0 podman[75725]: 2025-11-29 07:27:01.83021692 +0000 UTC m=+3.259149141 container died ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/87470932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-657135ade04c1991b7ec3491d943a8f51bb13b278100d3471e2b72c63496d06b-merged.mount: Deactivated successfully.
Nov 29 07:27:02 compute-0 ceph-mgr[75527]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:27:02 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'restful'
Nov 29 07:27:02 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:02.076+0000 7f7da71fd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:27:02 compute-0 podman[75725]: 2025-11-29 07:27:02.799163501 +0000 UTC m=+4.228095702 container remove ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88 (image=quay.io/ceph/ceph:v18, name=musing_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:02 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rgw'
Nov 29 07:27:02 compute-0 systemd[1]: libpod-conmon-ddaa84afd7ab9543881623b5ac4e8bb43ceac5c580239de7d03c86a80cbd0f88.scope: Deactivated successfully.
Nov 29 07:27:03 compute-0 ceph-mgr[75527]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:27:03 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rook'
Nov 29 07:27:03 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:03.462+0000 7f7da71fd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:27:04 compute-0 podman[75782]: 2025-11-29 07:27:04.871288539 +0000 UTC m=+0.031236957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'selftest'
Nov 29 07:27:05 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:05.469+0000 7f7da71fd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'snap_schedule'
Nov 29 07:27:05 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:05.706+0000 7f7da71fd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 podman[75782]: 2025-11-29 07:27:05.709186389 +0000 UTC m=+0.869134807 container create 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 29 07:27:05 compute-0 systemd[1]: Started libpod-conmon-34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7.scope.
Nov 29 07:27:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23b2707ba31f34403323d75037a72261c943c9f837bc4244d8f3006143c629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23b2707ba31f34403323d75037a72261c943c9f837bc4244d8f3006143c629/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23b2707ba31f34403323d75037a72261c943c9f837bc4244d8f3006143c629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:05.983+0000 7f7da71fd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:27:05 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'stats'
Nov 29 07:27:06 compute-0 podman[75782]: 2025-11-29 07:27:06.093468758 +0000 UTC m=+1.253417256 container init 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:27:06 compute-0 podman[75782]: 2025-11-29 07:27:06.103599837 +0000 UTC m=+1.263548295 container start 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:27:06 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'status'
Nov 29 07:27:06 compute-0 podman[75782]: 2025-11-29 07:27:06.449606574 +0000 UTC m=+1.609555022 container attach 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:27:06 compute-0 ceph-mgr[75527]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:27:06 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:06.479+0000 7f7da71fd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:27:06 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'telegraf'
Nov 29 07:27:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:27:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232169215' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]: 
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]: {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "health": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "status": "HEALTH_OK",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "checks": {},
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "mutes": []
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "election_epoch": 5,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "quorum": [
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         0
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     ],
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "quorum_names": [
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "compute-0"
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     ],
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "quorum_age": 19,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "monmap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "epoch": 1,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "min_mon_release_name": "reef",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_mons": 1
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "osdmap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "epoch": 1,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_osds": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_up_osds": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "osd_up_since": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_in_osds": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "osd_in_since": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_remapped_pgs": 0
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "pgmap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "pgs_by_state": [],
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_pgs": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_pools": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_objects": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "data_bytes": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "bytes_used": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "bytes_avail": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "bytes_total": 0
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "fsmap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "epoch": 1,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "by_rank": [],
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "up:standby": 0
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "mgrmap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "available": false,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "num_standbys": 0,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "modules": [
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:             "iostat",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:             "nfs",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:             "restful"
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         ],
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "services": {}
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "servicemap": {
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "epoch": 1,
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:         "services": {}
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     },
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]:     "progress_events": {}
Nov 29 07:27:06 compute-0 dazzling_beaver[75798]: }
Nov 29 07:27:06 compute-0 systemd[1]: libpod-34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7.scope: Deactivated successfully.
Nov 29 07:27:06 compute-0 podman[75782]: 2025-11-29 07:27:06.505186453 +0000 UTC m=+1.665134891 container died 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:27:06 compute-0 ceph-mgr[75527]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:27:06 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:06.718+0000 7f7da71fd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:27:06 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'telemetry'
Nov 29 07:27:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3232169215' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:07 compute-0 ceph-mgr[75527]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:27:07 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 07:27:07 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:07.356+0000 7f7da71fd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a23b2707ba31f34403323d75037a72261c943c9f837bc4244d8f3006143c629-merged.mount: Deactivated successfully.
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'volumes'
Nov 29 07:27:08 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:08.041+0000 7f7da71fd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:08.754+0000 7f7da71fd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'zabbix'
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:08.997+0000 7f7da71fd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:27:08 compute-0 ceph-mgr[75527]: ms_deliver_dispatch: unhandled message 0x555ffc64b1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fwfehy
Nov 29 07:27:09 compute-0 podman[75782]: 2025-11-29 07:27:09.634704341 +0000 UTC m=+4.794652799 container remove 34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7 (image=quay.io/ceph/ceph:v18, name=dazzling_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:09 compute-0 systemd[1]: libpod-conmon-34e242b3295148287a55a5c794aca6abdc72bcc59430e4448fad5669fa5dc3d7.scope: Deactivated successfully.
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr handle_mgr_map Activating!
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fwfehy(active, starting, since 0.757296s)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr handle_mgr_map I am now activating
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Manager daemon compute-0.fwfehy is now available
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: balancer
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer INFO root] Starting
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:27:09
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [balancer INFO root] No pools available
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: crash
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: devicehealth
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Starting
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: iostat
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: nfs
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: orchestrator
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: pg_autoscaler
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: progress
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [progress INFO root] Loading...
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [progress INFO root] No stored events to load
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [progress INFO root] Loaded [] historic events
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] recovery thread starting
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] starting setup
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: rbd_support
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: restful
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: status
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: telemetry
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [restful WARNING root] server not running: no certificate configured
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] PerfHandler: starting
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TaskHandler: starting
Nov 29 07:27:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: volumes
Nov 29 07:27:10 compute-0 ceph-mon[75237]: Activating manager daemon compute-0.fwfehy
Nov 29 07:27:11 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:11 compute-0 podman[75915]: 2025-11-29 07:27:11.685388357 +0000 UTC m=+0.023707262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:11 compute-0 podman[75915]: 2025-11-29 07:27:11.957968434 +0000 UTC m=+0.296287289 container create 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:11 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:27:11 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 07:27:11 compute-0 ceph-mgr[75527]: [rbd_support INFO root] setup complete
Nov 29 07:27:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 07:27:13 compute-0 ceph-mon[75237]: mgrmap e2: compute-0.fwfehy(active, starting, since 0.757296s)
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: Manager daemon compute-0.fwfehy is now available
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"}]: dispatch
Nov 29 07:27:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fwfehy(active, since 4s)
Nov 29 07:27:13 compute-0 systemd[1]: Started libpod-conmon-899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0.scope.
Nov 29 07:27:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8becefffd049acb10fc158025c1fc21df9176e470504a494b96fb89376c517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8becefffd049acb10fc158025c1fc21df9176e470504a494b96fb89376c517/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8becefffd049acb10fc158025c1fc21df9176e470504a494b96fb89376c517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 07:27:13 compute-0 podman[75915]: 2025-11-29 07:27:13.671857383 +0000 UTC m=+2.010176278 container init 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:27:13 compute-0 podman[75915]: 2025-11-29 07:27:13.678219495 +0000 UTC m=+2.016538390 container start 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:13 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:13 compute-0 podman[75915]: 2025-11-29 07:27:13.880961798 +0000 UTC m=+2.219280653 container attach 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:27:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:27:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677916145' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]: 
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]: {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "health": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "status": "HEALTH_OK",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "checks": {},
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "mutes": []
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "election_epoch": 5,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "quorum": [
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         0
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     ],
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "quorum_names": [
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "compute-0"
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     ],
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "quorum_age": 27,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "monmap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "epoch": 1,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "min_mon_release_name": "reef",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_mons": 1
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "osdmap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "epoch": 1,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_osds": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_up_osds": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "osd_up_since": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_in_osds": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "osd_in_since": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_remapped_pgs": 0
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "pgmap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "pgs_by_state": [],
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_pgs": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_pools": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_objects": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "data_bytes": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "bytes_used": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "bytes_avail": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "bytes_total": 0
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "fsmap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "epoch": 1,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "by_rank": [],
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "up:standby": 0
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "mgrmap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "available": true,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "num_standbys": 0,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "modules": [
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:             "iostat",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:             "nfs",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:             "restful"
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         ],
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "services": {}
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "servicemap": {
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "epoch": 1,
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "modified": "2025-11-29T07:26:43.858355+0000",
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:         "services": {}
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     },
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]:     "progress_events": {}
Nov 29 07:27:14 compute-0 heuristic_dewdney[75933]: }
Nov 29 07:27:14 compute-0 systemd[1]: libpod-899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0.scope: Deactivated successfully.
Nov 29 07:27:14 compute-0 podman[75915]: 2025-11-29 07:27:14.26643642 +0000 UTC m=+2.604755275 container died 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:14 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:14 compute-0 ceph-mon[75237]: mgrmap e3: compute-0.fwfehy(active, since 4s)
Nov 29 07:27:14 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:14 compute-0 ceph-mon[75237]: from='mgr.14102 192.168.122.100:0/2376335060' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2677916145' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8becefffd049acb10fc158025c1fc21df9176e470504a494b96fb89376c517-merged.mount: Deactivated successfully.
Nov 29 07:27:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fwfehy(active, since 6s)
Nov 29 07:27:15 compute-0 podman[75915]: 2025-11-29 07:27:15.556530744 +0000 UTC m=+3.894849599 container remove 899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:15 compute-0 podman[75971]: 2025-11-29 07:27:15.600739647 +0000 UTC m=+0.022889844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:15 compute-0 podman[75971]: 2025-11-29 07:27:15.734842982 +0000 UTC m=+0.156993189 container create 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:15 compute-0 systemd[1]: libpod-conmon-899d0f36910209ed8ada12de3d7c04d08837fe04fb55053becf69c200c199bd0.scope: Deactivated successfully.
Nov 29 07:27:15 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:15 compute-0 systemd[1]: Started libpod-conmon-2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1.scope.
Nov 29 07:27:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419f0643e295c72fc13fe5d686e3e8c7a2e930c8c7acbc65077b7f8d70c9a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419f0643e295c72fc13fe5d686e3e8c7a2e930c8c7acbc65077b7f8d70c9a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419f0643e295c72fc13fe5d686e3e8c7a2e930c8c7acbc65077b7f8d70c9a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99419f0643e295c72fc13fe5d686e3e8c7a2e930c8c7acbc65077b7f8d70c9a0/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 podman[75971]: 2025-11-29 07:27:15.910840742 +0000 UTC m=+0.332990949 container init 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:15 compute-0 podman[75971]: 2025-11-29 07:27:15.916348566 +0000 UTC m=+0.338498763 container start 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:15 compute-0 podman[75971]: 2025-11-29 07:27:15.989808325 +0000 UTC m=+0.411958542 container attach 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:27:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:27:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2771520649' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:27:16 compute-0 systemd[1]: libpod-2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1.scope: Deactivated successfully.
Nov 29 07:27:16 compute-0 podman[75971]: 2025-11-29 07:27:16.491641824 +0000 UTC m=+0.913792041 container died 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:27:16 compute-0 ceph-mon[75237]: mgrmap e4: compute-0.fwfehy(active, since 6s)
Nov 29 07:27:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2771520649' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-99419f0643e295c72fc13fe5d686e3e8c7a2e930c8c7acbc65077b7f8d70c9a0-merged.mount: Deactivated successfully.
Nov 29 07:27:16 compute-0 podman[75971]: 2025-11-29 07:27:16.782187634 +0000 UTC m=+1.204337831 container remove 2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 07:27:16 compute-0 systemd[1]: libpod-conmon-2ddc4df14d5cdf922a0767ebcc74677230c1025d5f6aaa17976306bd43c22dd1.scope: Deactivated successfully.
Nov 29 07:27:16 compute-0 podman[76028]: 2025-11-29 07:27:16.855684034 +0000 UTC m=+0.028385538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:17 compute-0 podman[76028]: 2025-11-29 07:27:17.524464627 +0000 UTC m=+0.697166041 container create 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:27:17 compute-0 systemd[1]: Started libpod-conmon-325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea.scope.
Nov 29 07:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712531d73d1b4dc16f421c0d3ff7af9e5836ed14cd5a1b9c2f7d1562d3fc135f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712531d73d1b4dc16f421c0d3ff7af9e5836ed14cd5a1b9c2f7d1562d3fc135f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712531d73d1b4dc16f421c0d3ff7af9e5836ed14cd5a1b9c2f7d1562d3fc135f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:17 compute-0 podman[76028]: 2025-11-29 07:27:17.623402893 +0000 UTC m=+0.796104317 container init 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:17 compute-0 podman[76028]: 2025-11-29 07:27:17.629711573 +0000 UTC m=+0.802412987 container start 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:27:17 compute-0 podman[76028]: 2025-11-29 07:27:17.724180218 +0000 UTC m=+0.896881692 container attach 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:27:17 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 07:27:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2609304007' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 07:27:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2609304007' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 07:27:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fwfehy(active, since 9s)
Nov 29 07:27:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2609304007' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 07:27:18 compute-0 systemd[1]: libpod-325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea.scope: Deactivated successfully.
Nov 29 07:27:18 compute-0 podman[76028]: 2025-11-29 07:27:18.340822851 +0000 UTC m=+1.513524255 container died 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:27:18 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: ignoring --setuser ceph since I am not root
Nov 29 07:27:18 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: ignoring --setgroup ceph since I am not root
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: pidfile_write: ignore empty --pid-file
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'alerts'
Nov 29 07:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-712531d73d1b4dc16f421c0d3ff7af9e5836ed14cd5a1b9c2f7d1562d3fc135f-merged.mount: Deactivated successfully.
Nov 29 07:27:18 compute-0 podman[76028]: 2025-11-29 07:27:18.746574934 +0000 UTC m=+1.919276388 container remove 325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea (image=quay.io/ceph/ceph:v18, name=heuristic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:27:18 compute-0 systemd[1]: libpod-conmon-325ec82ed5216696c5e39ceadeb623efe1e4ab638ad2f633ec8dfee5f8d67aea.scope: Deactivated successfully.
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:27:18 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'balancer'
Nov 29 07:27:18 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:18.870+0000 7fedd51cf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:27:18 compute-0 podman[76106]: 2025-11-29 07:27:18.793729409 +0000 UTC m=+0.026567910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:18 compute-0 podman[76106]: 2025-11-29 07:27:18.953075874 +0000 UTC m=+0.185914335 container create 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:19 compute-0 systemd[1]: Started libpod-conmon-441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9.scope.
Nov 29 07:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116b24f59ac78410b24d94b422fb0c14447ba2e3f3485862782fdb7c48464fda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116b24f59ac78410b24d94b422fb0c14447ba2e3f3485862782fdb7c48464fda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116b24f59ac78410b24d94b422fb0c14447ba2e3f3485862782fdb7c48464fda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 podman[76106]: 2025-11-29 07:27:19.057747519 +0000 UTC m=+0.290585960 container init 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:19 compute-0 podman[76106]: 2025-11-29 07:27:19.065129202 +0000 UTC m=+0.297967633 container start 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:19 compute-0 podman[76106]: 2025-11-29 07:27:19.069316489 +0000 UTC m=+0.302154900 container attach 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:19 compute-0 ceph-mgr[75527]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:27:19 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:19.152+0000 7fedd51cf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:27:19 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'cephadm'
Nov 29 07:27:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2609304007' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 07:27:19 compute-0 ceph-mon[75237]: mgrmap e5: compute-0.fwfehy(active, since 9s)
Nov 29 07:27:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 07:27:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071220166' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 07:27:19 compute-0 tender_robinson[76122]: {
Nov 29 07:27:19 compute-0 tender_robinson[76122]:     "epoch": 5,
Nov 29 07:27:19 compute-0 tender_robinson[76122]:     "available": true,
Nov 29 07:27:19 compute-0 tender_robinson[76122]:     "active_name": "compute-0.fwfehy",
Nov 29 07:27:19 compute-0 tender_robinson[76122]:     "num_standby": 0
Nov 29 07:27:19 compute-0 tender_robinson[76122]: }
Nov 29 07:27:19 compute-0 systemd[1]: libpod-441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9.scope: Deactivated successfully.
Nov 29 07:27:19 compute-0 podman[76106]: 2025-11-29 07:27:19.735792643 +0000 UTC m=+0.968631054 container died 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-116b24f59ac78410b24d94b422fb0c14447ba2e3f3485862782fdb7c48464fda-merged.mount: Deactivated successfully.
Nov 29 07:27:19 compute-0 podman[76106]: 2025-11-29 07:27:19.922918974 +0000 UTC m=+1.155757385 container remove 441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9 (image=quay.io/ceph/ceph:v18, name=tender_robinson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:27:19 compute-0 systemd[1]: libpod-conmon-441e98da49562e034965f8d951a92ac6feab37a9b969b578be7e5b5bf70271e9.scope: Deactivated successfully.
Nov 29 07:27:19 compute-0 podman[76160]: 2025-11-29 07:27:19.987193743 +0000 UTC m=+0.038546677 container create 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:20 compute-0 systemd[1]: Started libpod-conmon-19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455.scope.
Nov 29 07:27:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97436a1d46b188492a2f513a2ebab34590dd8142224eb4fe4f6ad796f79290ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97436a1d46b188492a2f513a2ebab34590dd8142224eb4fe4f6ad796f79290ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97436a1d46b188492a2f513a2ebab34590dd8142224eb4fe4f6ad796f79290ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:20 compute-0 podman[76160]: 2025-11-29 07:27:19.972098811 +0000 UTC m=+0.023451775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:20 compute-0 podman[76160]: 2025-11-29 07:27:20.159893255 +0000 UTC m=+0.211246209 container init 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:20 compute-0 podman[76160]: 2025-11-29 07:27:20.167553484 +0000 UTC m=+0.218906428 container start 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:27:20 compute-0 podman[76160]: 2025-11-29 07:27:20.179380528 +0000 UTC m=+0.230733492 container attach 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:27:21 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'crash'
Nov 29 07:27:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1071220166' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 07:27:21 compute-0 ceph-mgr[75527]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:27:21 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'dashboard'
Nov 29 07:27:21 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:21.387+0000 7fedd51cf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:27:22 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:27:23 compute-0 ceph-mgr[75527]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:27:23 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:27:23 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:23.140+0000 7fedd51cf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:27:23 compute-0 sshd-session[76211]: Received disconnect from 103.236.140.19 port 60472:11: Bye Bye [preauth]
Nov 29 07:27:23 compute-0 sshd-session[76211]: Disconnected from authenticating user root 103.236.140.19 port 60472 [preauth]
Nov 29 07:27:23 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:27:23 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:27:23 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]:   from numpy import show_config as show_numpy_config
Nov 29 07:27:23 compute-0 ceph-mgr[75527]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:27:23 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:23.768+0000 7fedd51cf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:27:23 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'influx'
Nov 29 07:27:24 compute-0 ceph-mgr[75527]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:27:24 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'insights'
Nov 29 07:27:24 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:24.016+0000 7fedd51cf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:27:24 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'iostat'
Nov 29 07:27:24 compute-0 ceph-mgr[75527]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:27:24 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:27:24 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:24.470+0000 7fedd51cf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:27:26 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'localpool'
Nov 29 07:27:26 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 07:27:27 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'mirroring'
Nov 29 07:27:27 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'nfs'
Nov 29 07:27:28 compute-0 ceph-mgr[75527]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:27:28 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'orchestrator'
Nov 29 07:27:28 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:28.225+0000 7fedd51cf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:27:28 compute-0 ceph-mgr[75527]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:28 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 07:27:28 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:28.931+0000 7fedd51cf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'osd_support'
Nov 29 07:27:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:29.190+0000 7fedd51cf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 07:27:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:29.431+0000 7fedd51cf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:29.713+0000 7fedd51cf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'progress'
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:27:29 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'prometheus'
Nov 29 07:27:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:29.956+0000 7fedd51cf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:27:30 compute-0 sshd-session[76213]: Invalid user sipv from 101.47.142.104 port 50702
Nov 29 07:27:30 compute-0 ceph-mgr[75527]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:27:30 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rbd_support'
Nov 29 07:27:30 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:30.974+0000 7fedd51cf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:27:31 compute-0 ceph-mgr[75527]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:27:31 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:31.272+0000 7fedd51cf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:27:31 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'restful'
Nov 29 07:27:31 compute-0 sshd-session[76213]: Received disconnect from 101.47.142.104 port 50702:11: Bye Bye [preauth]
Nov 29 07:27:31 compute-0 sshd-session[76213]: Disconnected from invalid user sipv 101.47.142.104 port 50702 [preauth]
Nov 29 07:27:32 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rgw'
Nov 29 07:27:32 compute-0 ceph-mgr[75527]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:27:32 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'rook'
Nov 29 07:27:32 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:32.737+0000 7fedd51cf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:27:34 compute-0 ceph-mgr[75527]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:27:34 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:34.823+0000 7fedd51cf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:27:34 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'selftest'
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'snap_schedule'
Nov 29 07:27:35 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:35.062+0000 7fedd51cf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'stats'
Nov 29 07:27:35 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:35.329+0000 7fedd51cf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'status'
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:27:35 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'telegraf'
Nov 29 07:27:35 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:35.875+0000 7fedd51cf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:27:36 compute-0 ceph-mgr[75527]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:27:36 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'telemetry'
Nov 29 07:27:36 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:36.146+0000 7fedd51cf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:27:36 compute-0 sshd-session[76215]: Invalid user halo from 20.185.243.158 port 45128
Nov 29 07:27:36 compute-0 sshd-session[76215]: Received disconnect from 20.185.243.158 port 45128:11: Bye Bye [preauth]
Nov 29 07:27:36 compute-0 sshd-session[76215]: Disconnected from invalid user halo 20.185.243.158 port 45128 [preauth]
Nov 29 07:27:36 compute-0 ceph-mgr[75527]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:27:36 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 07:27:36 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:36.830+0000 7fedd51cf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:27:37 compute-0 ceph-mgr[75527]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:37 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:37.535+0000 7fedd51cf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:27:37 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'volumes'
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr[py] Loading python module 'zabbix'
Nov 29 07:27:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:38.264+0000 7fedd51cf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:27:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T07:27:38.501+0000 7fedd51cf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fwfehy restarted
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: ms_deliver_dispatch: unhandled message 0x55dec73a91e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fwfehy
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr handle_mgr_map Activating!
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr handle_mgr_map I am now activating
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fwfehy(active, starting, since 0.0158559s)
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Manager daemon compute-0.fwfehy is now available
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: balancer
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:27:38
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [balancer INFO root] No pools available
Nov 29 07:27:38 compute-0 ceph-mon[75237]: Active manager daemon compute-0.fwfehy restarted
Nov 29 07:27:38 compute-0 ceph-mon[75237]: Activating manager daemon compute-0.fwfehy
Nov 29 07:27:38 compute-0 ceph-mon[75237]: osdmap e2: 0 total, 0 up, 0 in
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mgrmap e6: compute-0.fwfehy(active, starting, since 0.0158559s)
Nov 29 07:27:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.fwfehy", "id": "compute-0.fwfehy"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mon[75237]: Manager daemon compute-0.fwfehy is now available
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: cephadm
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: crash
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: devicehealth
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: iostat
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: nfs
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: orchestrator
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: pg_autoscaler
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: progress
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [progress INFO root] Loading...
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [progress INFO root] No stored events to load
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [progress INFO root] Loaded [] historic events
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] recovery thread starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] starting setup
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: rbd_support
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: restful
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: status
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: telemetry
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] PerfHandler: starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TaskHandler: starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [restful WARNING root] server not running: no certificate configured
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:27:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"} v 0) v1
Nov 29 07:27:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"}]: dispatch
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] setup complete
Nov 29 07:27:38 compute-0 ceph-mgr[75527]: mgr load Constructed class from module: volumes
Nov 29 07:27:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 07:27:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:39 compute-0 sshd-session[76225]: Invalid user supermaint from 103.234.151.178 port 61598
Nov 29 07:27:40 compute-0 sshd-session[76225]: Received disconnect from 103.234.151.178 port 61598:11: Bye Bye [preauth]
Nov 29 07:27:40 compute-0 sshd-session[76225]: Disconnected from invalid user supermaint 103.234.151.178 port 61598 [preauth]
Nov 29 07:27:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 07:27:40 compute-0 ceph-mon[75237]: Found migration_current of "None". Setting to last migration.
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:27:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fwfehy/trash_purge_schedule"}]: dispatch
Nov 29 07:27:40 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14130 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 07:27:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fwfehy(active, since 2s)
Nov 29 07:27:40 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:40 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14130 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 07:27:40 compute-0 optimistic_margulis[76176]: {
Nov 29 07:27:40 compute-0 optimistic_margulis[76176]:     "mgrmap_epoch": 7,
Nov 29 07:27:40 compute-0 optimistic_margulis[76176]:     "initialized": true
Nov 29 07:27:40 compute-0 optimistic_margulis[76176]: }
Nov 29 07:27:40 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:40 compute-0 systemd[1]: libpod-19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455.scope: Deactivated successfully.
Nov 29 07:27:40 compute-0 podman[76160]: 2025-11-29 07:27:40.543635167 +0000 UTC m=+20.594988131 container died 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-97436a1d46b188492a2f513a2ebab34590dd8142224eb4fe4f6ad796f79290ca-merged.mount: Deactivated successfully.
Nov 29 07:27:40 compute-0 podman[76160]: 2025-11-29 07:27:40.67384441 +0000 UTC m=+20.725197344 container remove 19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455 (image=quay.io/ceph/ceph:v18, name=optimistic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:40 compute-0 systemd[1]: libpod-conmon-19663032b657d0931da777831d30e016680ac2c45cdaa0a6c8176bda5975e455.scope: Deactivated successfully.
Nov 29 07:27:40 compute-0 podman[76344]: 2025-11-29 07:27:40.740410216 +0000 UTC m=+0.040675802 container create e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:27:40 compute-0 systemd[1]: Started libpod-conmon-e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0.scope.
Nov 29 07:27:40 compute-0 podman[76344]: 2025-11-29 07:27:40.721274371 +0000 UTC m=+0.021539947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/320f41b11d6d5c24158019cba51a1f6bf2584c091210e3f4ad17bd596cc8568c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/320f41b11d6d5c24158019cba51a1f6bf2584c091210e3f4ad17bd596cc8568c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/320f41b11d6d5c24158019cba51a1f6bf2584c091210e3f4ad17bd596cc8568c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:40 compute-0 podman[76344]: 2025-11-29 07:27:40.840584859 +0000 UTC m=+0.140850515 container init e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:27:40 compute-0 podman[76344]: 2025-11-29 07:27:40.849898842 +0000 UTC m=+0.150164408 container start e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:27:40 compute-0 podman[76344]: 2025-11-29 07:27:40.853808472 +0000 UTC m=+0.154074128 container attach e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 07:27:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:27:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:41 compute-0 systemd[1]: libpod-e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0.scope: Deactivated successfully.
Nov 29 07:27:41 compute-0 podman[76344]: 2025-11-29 07:27:41.446142793 +0000 UTC m=+0.746408439 container died e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-320f41b11d6d5c24158019cba51a1f6bf2584c091210e3f4ad17bd596cc8568c-merged.mount: Deactivated successfully.
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:27:41] ENGINE Bus STARTING
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:27:41] ENGINE Bus STARTING
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:27:41] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:27:41] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:27:41] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:27:41] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:27:41] ENGINE Bus STARTED
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:27:41] ENGINE Bus STARTED
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:27:41] ENGINE Client ('192.168.122.100', 46692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:27:41 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:27:41] ENGINE Client ('192.168.122.100', 46692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:27:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:27:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fwfehy(active, since 3s)
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 07:27:42 compute-0 ceph-mon[75237]: mgrmap e7: compute-0.fwfehy(active, since 2s)
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:42 compute-0 podman[76344]: 2025-11-29 07:27:42.364209212 +0000 UTC m=+1.664474778 container remove e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0 (image=quay.io/ceph/ceph:v18, name=pedantic_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:42 compute-0 systemd[1]: libpod-conmon-e15f822049c260a8b1b1cebd12d596544c06af66c322e7e5a9d4a99bb4f1bbe0.scope: Deactivated successfully.
Nov 29 07:27:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019916250 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:42 compute-0 podman[76422]: 2025-11-29 07:27:42.427642714 +0000 UTC m=+0.035715389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:42 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:42 compute-0 podman[76422]: 2025-11-29 07:27:42.615251905 +0000 UTC m=+0.223324570 container create 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:27:42 compute-0 systemd[1]: Started libpod-conmon-62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09.scope.
Nov 29 07:27:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f45f84dd16b5d57e901dea0cd547d9711016b080e965600eca0f7878aabaf51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f45f84dd16b5d57e901dea0cd547d9711016b080e965600eca0f7878aabaf51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f45f84dd16b5d57e901dea0cd547d9711016b080e965600eca0f7878aabaf51/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:42 compute-0 podman[76422]: 2025-11-29 07:27:42.727916105 +0000 UTC m=+0.335988830 container init 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:42 compute-0 podman[76422]: 2025-11-29 07:27:42.739690238 +0000 UTC m=+0.347762863 container start 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:27:42 compute-0 podman[76422]: 2025-11-29 07:27:42.743770932 +0000 UTC m=+0.351843607 container attach 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 07:27:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_user
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 07:27:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 07:27:43 compute-0 ceph-mon[75237]: [29/Nov/2025:07:27:41] ENGINE Bus STARTING
Nov 29 07:27:43 compute-0 ceph-mon[75237]: [29/Nov/2025:07:27:41] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:27:43 compute-0 ceph-mon[75237]: [29/Nov/2025:07:27:41] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:27:43 compute-0 ceph-mon[75237]: [29/Nov/2025:07:27:41] ENGINE Bus STARTED
Nov 29 07:27:43 compute-0 ceph-mon[75237]: [29/Nov/2025:07:27:41] ENGINE Client ('192.168.122.100', 46692) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:27:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:43 compute-0 ceph-mon[75237]: mgrmap e8: compute-0.fwfehy(active, since 3s)
Nov 29 07:27:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_config
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 07:27:43 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 07:27:43 compute-0 hardcore_pascal[76437]: ssh user set to ceph-admin. sudo will be used
Nov 29 07:27:43 compute-0 systemd[1]: libpod-62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09.scope: Deactivated successfully.
Nov 29 07:27:43 compute-0 podman[76422]: 2025-11-29 07:27:43.945600931 +0000 UTC m=+1.553673566 container died 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f45f84dd16b5d57e901dea0cd547d9711016b080e965600eca0f7878aabaf51-merged.mount: Deactivated successfully.
Nov 29 07:27:44 compute-0 podman[76422]: 2025-11-29 07:27:44.137833747 +0000 UTC m=+1.745906372 container remove 62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09 (image=quay.io/ceph/ceph:v18, name=hardcore_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:27:44 compute-0 systemd[1]: libpod-conmon-62d45083ab16dd3936d70c3721482bb15500e0bd530310bb9c060a565dbe5b09.scope: Deactivated successfully.
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.223504668 +0000 UTC m=+0.062475813 container create 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:44 compute-0 systemd[1]: Started libpod-conmon-2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d.scope.
Nov 29 07:27:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.18923737 +0000 UTC m=+0.028208505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.298749035 +0000 UTC m=+0.137720170 container init 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.306613057 +0000 UTC m=+0.145584152 container start 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.310193631 +0000 UTC m=+0.149164736 container attach 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 07:27:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: [cephadm INFO root] Set ssh private key
Nov 29 07:27:44 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 07:27:44 compute-0 systemd[1]: libpod-2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d.scope: Deactivated successfully.
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.904918253 +0000 UTC m=+0.743889368 container died 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:44 compute-0 ceph-mon[75237]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:44 compute-0 ceph-mon[75237]: Set ssh ssh_user
Nov 29 07:27:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:44 compute-0 ceph-mon[75237]: Set ssh ssh_config
Nov 29 07:27:44 compute-0 ceph-mon[75237]: ssh user set to ceph-admin. sudo will be used
Nov 29 07:27:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-66de8504c5ba761c9739177604e3d7d5b2fafbf571b37718f400d7b4b41579fc-merged.mount: Deactivated successfully.
Nov 29 07:27:44 compute-0 podman[76475]: 2025-11-29 07:27:44.95268328 +0000 UTC m=+0.791654385 container remove 2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d (image=quay.io/ceph/ceph:v18, name=elegant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:44 compute-0 systemd[1]: libpod-conmon-2932d523f8752dabc2090d0de9df1bceb3757765a06acdcbaabf40cbc3ff143d.scope: Deactivated successfully.
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.020553394 +0000 UTC m=+0.044523392 container create 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:45 compute-0 systemd[1]: Started libpod-conmon-67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567.scope.
Nov 29 07:27:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.097923925 +0000 UTC m=+0.121893953 container init 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.003064982 +0000 UTC m=+0.027035070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.110596967 +0000 UTC m=+0.134566975 container start 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.115165741 +0000 UTC m=+0.139135749 container attach 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:27:45 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 07:27:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:45 compute-0 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 07:27:45 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 07:27:45 compute-0 systemd[1]: libpod-67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567.scope: Deactivated successfully.
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.686776784 +0000 UTC m=+0.710746782 container died 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c3621ef246d373c46e81ba3938d9cf8ddf9505c99e28c329aa34ef9ce2889ed-merged.mount: Deactivated successfully.
Nov 29 07:27:45 compute-0 podman[76525]: 2025-11-29 07:27:45.738770259 +0000 UTC m=+0.762740257 container remove 67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567 (image=quay.io/ceph/ceph:v18, name=hungry_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:27:45 compute-0 systemd[1]: libpod-conmon-67670d0b2267f5ba357028be1afc7b372900cda73ccdfc84c44f1a3171fe1567.scope: Deactivated successfully.
Nov 29 07:27:45 compute-0 podman[76584]: 2025-11-29 07:27:45.806541521 +0000 UTC m=+0.047940423 container create 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:27:45 compute-0 systemd[1]: Started libpod-conmon-8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9.scope.
Nov 29 07:27:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e20634b4cce9a61fd2beb7f419d9b0ee5edcd7af597f6b8f696b5488ef41d92/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e20634b4cce9a61fd2beb7f419d9b0ee5edcd7af597f6b8f696b5488ef41d92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e20634b4cce9a61fd2beb7f419d9b0ee5edcd7af597f6b8f696b5488ef41d92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:45 compute-0 podman[76584]: 2025-11-29 07:27:45.868846169 +0000 UTC m=+0.110245051 container init 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:27:45 compute-0 podman[76584]: 2025-11-29 07:27:45.87805723 +0000 UTC m=+0.119456092 container start 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:27:45 compute-0 podman[76584]: 2025-11-29 07:27:45.786486706 +0000 UTC m=+0.027885588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:45 compute-0 podman[76584]: 2025-11-29 07:27:45.883618325 +0000 UTC m=+0.125017197 container attach 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:27:45 compute-0 ceph-mon[75237]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:45 compute-0 ceph-mon[75237]: Set ssh ssh_identity_key
Nov 29 07:27:45 compute-0 ceph-mon[75237]: Set ssh private key
Nov 29 07:27:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:46 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:46 compute-0 upbeat_banzai[76600]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZR+tjJvhJFC0+0sKxYqpFMpcV6keEkzML57MbRzUxYOToEB2ktb01HPpYXYB4UDap5Fm34AWRtRp9MRa2FTIUqx9xcM1CR6qAIwUwBlsCaG+JbcKDHAgO5RlWdNTFeZh1BQYql1tGWGwSy6PDnQhfHSa9XDocMxqOLHc/Qg3V/8zIFVtsU8WgDGyuMEq+OIYYCyQyE90uZxKBvhbnJ61tZ/4iGkic6pA/JNAecuDjNCsQhkyZYgDHon7UFoso3dR567XN7fQi/QU5C7XeXjcx7CKAi1KbPHgR0yTnTRS1/+aSOY22KHx2J/QY3EJJEuKJ7PweRcVGegjCdXcqGOrqhuLMWZORvALtz3pBH5f2Uxfm+f5MqeESJ9F2YVndw7/B6Jq1w8lsjIofaWlFx0yzILlW9Fl0vBaix1D170/ZebCyyO72sSphg81DYr05+0iJ7hD6xJem46aR7jkHjjOQT02IpnB0+3LvWqIXa2iAKxrKCCXLHh6Wz2KkGIiDixk= zuul@controller
Nov 29 07:27:46 compute-0 systemd[1]: libpod-8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9.scope: Deactivated successfully.
Nov 29 07:27:46 compute-0 podman[76584]: 2025-11-29 07:27:46.413552806 +0000 UTC m=+0.654951668 container died 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e20634b4cce9a61fd2beb7f419d9b0ee5edcd7af597f6b8f696b5488ef41d92-merged.mount: Deactivated successfully.
Nov 29 07:27:46 compute-0 podman[76584]: 2025-11-29 07:27:46.472142368 +0000 UTC m=+0.713541230 container remove 8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9 (image=quay.io/ceph/ceph:v18, name=upbeat_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:27:46 compute-0 systemd[1]: libpod-conmon-8576ec6bf797477f2fd0223f16a6730f167b508e86192ee495abb3dbf0ae73b9.scope: Deactivated successfully.
Nov 29 07:27:46 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:46 compute-0 podman[76639]: 2025-11-29 07:27:46.5433208 +0000 UTC m=+0.046652766 container create 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:27:46 compute-0 systemd[1]: Started libpod-conmon-6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da.scope.
Nov 29 07:27:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e149a7ee3e4c5f98e4660faf604bee38ba123d93d3b5252a34c04fa91fd4ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e149a7ee3e4c5f98e4660faf604bee38ba123d93d3b5252a34c04fa91fd4ec5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e149a7ee3e4c5f98e4660faf604bee38ba123d93d3b5252a34c04fa91fd4ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:46 compute-0 podman[76639]: 2025-11-29 07:27:46.521763663 +0000 UTC m=+0.025095609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:46 compute-0 podman[76639]: 2025-11-29 07:27:46.633506085 +0000 UTC m=+0.136838051 container init 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:46 compute-0 podman[76639]: 2025-11-29 07:27:46.63953304 +0000 UTC m=+0.142864956 container start 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:46 compute-0 podman[76639]: 2025-11-29 07:27:46.643564173 +0000 UTC m=+0.146896089 container attach 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:27:46 compute-0 ceph-mon[75237]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:46 compute-0 ceph-mon[75237]: Set ssh ssh_identity_pub
Nov 29 07:27:46 compute-0 ceph-mon[75237]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:47 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:47 compute-0 sshd-session[76681]: Accepted publickey for ceph-admin from 192.168.122.100 port 39502 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:47 compute-0 systemd-logind[782]: New session 21 of user ceph-admin.
Nov 29 07:27:47 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 07:27:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052933 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:47 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 07:27:47 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 07:27:47 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 07:27:47 compute-0 systemd[76685]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:47 compute-0 sshd-session[76691]: Accepted publickey for ceph-admin from 192.168.122.100 port 39508 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:47 compute-0 systemd-logind[782]: New session 23 of user ceph-admin.
Nov 29 07:27:47 compute-0 systemd[76685]: Queued start job for default target Main User Target.
Nov 29 07:27:47 compute-0 systemd[76685]: Created slice User Application Slice.
Nov 29 07:27:47 compute-0 systemd[76685]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:27:47 compute-0 systemd[76685]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:27:47 compute-0 systemd[76685]: Reached target Paths.
Nov 29 07:27:47 compute-0 systemd[76685]: Reached target Timers.
Nov 29 07:27:47 compute-0 systemd[76685]: Starting D-Bus User Message Bus Socket...
Nov 29 07:27:47 compute-0 systemd[76685]: Starting Create User's Volatile Files and Directories...
Nov 29 07:27:47 compute-0 systemd[76685]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:27:47 compute-0 systemd[76685]: Reached target Sockets.
Nov 29 07:27:47 compute-0 systemd[76685]: Finished Create User's Volatile Files and Directories.
Nov 29 07:27:47 compute-0 systemd[76685]: Reached target Basic System.
Nov 29 07:27:47 compute-0 systemd[76685]: Reached target Main User Target.
Nov 29 07:27:47 compute-0 systemd[76685]: Startup finished in 157ms.
Nov 29 07:27:47 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 07:27:47 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 29 07:27:47 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 07:27:47 compute-0 sshd-session[76681]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:47 compute-0 sshd-session[76691]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:47 compute-0 sudo[76706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:47 compute-0 sudo[76706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:47 compute-0 sudo[76706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:47 compute-0 sudo[76731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:47 compute-0 sudo[76731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:47 compute-0 sudo[76731]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:47 compute-0 ceph-mon[75237]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:48 compute-0 sshd-session[76756]: Accepted publickey for ceph-admin from 192.168.122.100 port 39514 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:48 compute-0 systemd-logind[782]: New session 24 of user ceph-admin.
Nov 29 07:27:48 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 07:27:48 compute-0 sshd-session[76756]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:48 compute-0 sudo[76760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:48 compute-0 sudo[76760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:48 compute-0 sudo[76760]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:48 compute-0 sudo[76785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:27:48 compute-0 sudo[76785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:48 compute-0 sudo[76785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:48 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:48 compute-0 sshd-session[76810]: Accepted publickey for ceph-admin from 192.168.122.100 port 39524 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:48 compute-0 systemd-logind[782]: New session 25 of user ceph-admin.
Nov 29 07:27:48 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 07:27:48 compute-0 sshd-session[76810]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:48 compute-0 sudo[76814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:48 compute-0 sudo[76814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:48 compute-0 sudo[76814]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:48 compute-0 sudo[76839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:27:48 compute-0 sudo[76839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:48 compute-0 sudo[76839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:48 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 07:27:48 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 07:27:49 compute-0 sshd-session[76864]: Accepted publickey for ceph-admin from 192.168.122.100 port 39538 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:49 compute-0 systemd-logind[782]: New session 26 of user ceph-admin.
Nov 29 07:27:49 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 07:27:49 compute-0 sshd-session[76864]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:49 compute-0 sudo[76868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:49 compute-0 sudo[76868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:49 compute-0 sudo[76868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:49 compute-0 sudo[76893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:27:49 compute-0 sudo[76893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:49 compute-0 sudo[76893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:49 compute-0 sshd-session[76918]: Accepted publickey for ceph-admin from 192.168.122.100 port 39542 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:49 compute-0 systemd-logind[782]: New session 27 of user ceph-admin.
Nov 29 07:27:49 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 07:27:49 compute-0 sshd-session[76918]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:49 compute-0 sudo[76922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:49 compute-0 sudo[76922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:49 compute-0 sudo[76922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:49 compute-0 sudo[76947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:27:49 compute-0 sudo[76947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:49 compute-0 sudo[76947]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:50 compute-0 sshd-session[76972]: Accepted publickey for ceph-admin from 192.168.122.100 port 39548 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:50 compute-0 systemd-logind[782]: New session 28 of user ceph-admin.
Nov 29 07:27:50 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 07:27:50 compute-0 sshd-session[76972]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:50 compute-0 sudo[76976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:50 compute-0 sudo[76976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:50 compute-0 sudo[76976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:50 compute-0 ceph-mon[75237]: Deploying cephadm binary to compute-0
Nov 29 07:27:50 compute-0 sudo[77001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:27:50 compute-0 sudo[77001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:50 compute-0 sudo[77001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:50 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:50 compute-0 sshd-session[77026]: Accepted publickey for ceph-admin from 192.168.122.100 port 39564 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:50 compute-0 systemd-logind[782]: New session 29 of user ceph-admin.
Nov 29 07:27:50 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 07:27:50 compute-0 sshd-session[77026]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:50 compute-0 sudo[77030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:50 compute-0 sudo[77030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:50 compute-0 sudo[77030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:50 compute-0 sudo[77055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:27:50 compute-0 sudo[77055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:50 compute-0 sudo[77055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:51 compute-0 sshd-session[77080]: Accepted publickey for ceph-admin from 192.168.122.100 port 39570 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:51 compute-0 systemd-logind[782]: New session 30 of user ceph-admin.
Nov 29 07:27:51 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 07:27:51 compute-0 sshd-session[77080]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:51 compute-0 sudo[77084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:51 compute-0 sudo[77084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:51 compute-0 sudo[77084]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:51 compute-0 sudo[77109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:27:51 compute-0 sudo[77109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:51 compute-0 sudo[77109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:51 compute-0 sshd-session[77134]: Accepted publickey for ceph-admin from 192.168.122.100 port 39586 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:51 compute-0 systemd-logind[782]: New session 31 of user ceph-admin.
Nov 29 07:27:51 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 07:27:51 compute-0 sshd-session[77134]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:52 compute-0 sshd-session[77161]: Accepted publickey for ceph-admin from 192.168.122.100 port 39600 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:52 compute-0 systemd-logind[782]: New session 32 of user ceph-admin.
Nov 29 07:27:52 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 07:27:52 compute-0 sshd-session[77161]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:52 compute-0 sudo[77165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:52 compute-0 sudo[77165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:52 compute-0 sudo[77165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:52 compute-0 sudo[77192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:27:52 compute-0 sudo[77192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:52 compute-0 sudo[77192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:52 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:52 compute-0 sshd-session[77217]: Accepted publickey for ceph-admin from 192.168.122.100 port 39610 ssh2: RSA SHA256:ei20DuU97i+OzSG1I2IKSD1P0mnnHdAB5FKXo24KVFQ
Nov 29 07:27:52 compute-0 systemd-logind[782]: New session 33 of user ceph-admin.
Nov 29 07:27:52 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 07:27:52 compute-0 sshd-session[77217]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:27:52 compute-0 sudo[77221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:52 compute-0 sudo[77221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:52 compute-0 sudo[77221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:52 compute-0 sudo[77246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:27:52 compute-0 sudo[77246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:53 compute-0 sudo[77246]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:27:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:53 compute-0 ceph-mgr[75527]: [cephadm INFO root] Added host compute-0
Nov 29 07:27:53 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 07:27:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:27:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:53 compute-0 heuristic_shaw[76655]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 07:27:53 compute-0 systemd[1]: libpod-6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da.scope: Deactivated successfully.
Nov 29 07:27:53 compute-0 podman[76639]: 2025-11-29 07:27:53.173942265 +0000 UTC m=+6.677274221 container died 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e149a7ee3e4c5f98e4660faf604bee38ba123d93d3b5252a34c04fa91fd4ec5-merged.mount: Deactivated successfully.
Nov 29 07:27:53 compute-0 sudo[77290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:53 compute-0 sudo[77290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:53 compute-0 sudo[77290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-0 podman[76639]: 2025-11-29 07:27:53.242457664 +0000 UTC m=+6.745789580 container remove 6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:53 compute-0 systemd[1]: libpod-conmon-6064b004eaf0589609a5a91f4b20c4b6fef5aebec9d76855932e8113a936d0da.scope: Deactivated successfully.
Nov 29 07:27:53 compute-0 sshd-session[77182]: Invalid user bob from 114.34.106.146 port 40366
Nov 29 07:27:53 compute-0 sudo[77328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:53 compute-0 sudo[77328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:53 compute-0 sudo[77328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-0 podman[77332]: 2025-11-29 07:27:53.360458889 +0000 UTC m=+0.061939990 container create d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:27:53 compute-0 systemd[1]: Started libpod-conmon-d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54.scope.
Nov 29 07:27:53 compute-0 sudo[77366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:53 compute-0 sudo[77366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:53 compute-0 sudo[77366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd21bcc024df8a65966e23197338125e26381dc3e42d32825cba9dc3b07a27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd21bcc024df8a65966e23197338125e26381dc3e42d32825cba9dc3b07a27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:53 compute-0 podman[77332]: 2025-11-29 07:27:53.334966727 +0000 UTC m=+0.036447858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd21bcc024df8a65966e23197338125e26381dc3e42d32825cba9dc3b07a27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:53 compute-0 podman[77332]: 2025-11-29 07:27:53.443495385 +0000 UTC m=+0.144976486 container init d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:53 compute-0 podman[77332]: 2025-11-29 07:27:53.453925056 +0000 UTC m=+0.155406177 container start d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:53 compute-0 podman[77332]: 2025-11-29 07:27:53.458834793 +0000 UTC m=+0.160315914 container attach d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:27:53 compute-0 sudo[77396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 29 07:27:53 compute-0 sudo[77396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:53 compute-0 sshd-session[77182]: Received disconnect from 114.34.106.146 port 40366:11: Bye Bye [preauth]
Nov 29 07:27:53 compute-0 sshd-session[77182]: Disconnected from invalid user bob 114.34.106.146 port 40366 [preauth]
Nov 29 07:27:53 compute-0 podman[77451]: 2025-11-29 07:27:53.881459858 +0000 UTC m=+0.076425316 container create 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:53 compute-0 systemd[1]: Started libpod-conmon-96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df.scope.
Nov 29 07:27:53 compute-0 podman[77451]: 2025-11-29 07:27:53.847838415 +0000 UTC m=+0.042803953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:53 compute-0 podman[77451]: 2025-11-29 07:27:53.966943148 +0000 UTC m=+0.161908626 container init 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:27:53 compute-0 podman[77451]: 2025-11-29 07:27:53.977372659 +0000 UTC m=+0.172338137 container start 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:53 compute-0 podman[77451]: 2025-11-29 07:27:53.981931518 +0000 UTC m=+0.176896986 container attach 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 07:27:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:27:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:54 compute-0 mystifying_gould[77391]: Scheduled mon update...
Nov 29 07:27:54 compute-0 systemd[1]: libpod-d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54.scope: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77332]: 2025-11-29 07:27:54.058535076 +0000 UTC m=+0.760016177 container died d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-65dd21bcc024df8a65966e23197338125e26381dc3e42d32825cba9dc3b07a27-merged.mount: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77332]: 2025-11-29 07:27:54.115785233 +0000 UTC m=+0.817266344 container remove d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54 (image=quay.io/ceph/ceph:v18, name=mystifying_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:54 compute-0 systemd[1]: libpod-conmon-d9996b61b2731636e72cd839cf5ff3392ebcb2f8246983d983c679fc4e13cc54.scope: Deactivated successfully.
Nov 29 07:27:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:54 compute-0 ceph-mon[75237]: Added host compute-0
Nov 29 07:27:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:27:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.195795791 +0000 UTC m=+0.052172936 container create 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:27:54 compute-0 systemd[1]: Started libpod-conmon-3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22.scope.
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.174919949 +0000 UTC m=+0.031297084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f43ebdcb6f54e952583fea15b183c6a238b4a0820613ea2be71fb8a0f6d6c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f43ebdcb6f54e952583fea15b183c6a238b4a0820613ea2be71fb8a0f6d6c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f43ebdcb6f54e952583fea15b183c6a238b4a0820613ea2be71fb8a0f6d6c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.29355463 +0000 UTC m=+0.149931845 container init 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.303831097 +0000 UTC m=+0.160208262 container start 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.308134239 +0000 UTC m=+0.164511454 container attach 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:27:54 compute-0 strange_hypatia[77485]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 07:27:54 compute-0 systemd[1]: libpod-96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df.scope: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77451]: 2025-11-29 07:27:54.329319899 +0000 UTC m=+0.524285377 container died 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a90d89998fbe4bcfac4e9c28248e6f64793680ab1224bd21c776d22d14f897-merged.mount: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77451]: 2025-11-29 07:27:54.389675246 +0000 UTC m=+0.584640734 container remove 96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df (image=quay.io/ceph/ceph:v18, name=strange_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:54 compute-0 systemd[1]: libpod-conmon-96111da2b9985ee17f562bce340961a8bed69db285ef23ed0b85526ca7ee99df.scope: Deactivated successfully.
Nov 29 07:27:54 compute-0 sudo[77396]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 07:27:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:54 compute-0 sudo[77539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:54 compute-0 sudo[77539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:54 compute-0 sudo[77539]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:54 compute-0 sudo[77564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:54 compute-0 sudo[77564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:54 compute-0 sudo[77564]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:54 compute-0 sudo[77589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:54 compute-0 sudo[77589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:54 compute-0 sudo[77589]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:54 compute-0 sudo[77633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:27:54 compute-0 sudo[77633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 07:27:54 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 07:27:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:27:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:54 compute-0 zealous_mcnulty[77521]: Scheduled mgr update...
Nov 29 07:27:54 compute-0 systemd[1]: libpod-3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22.scope: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.952840971 +0000 UTC m=+0.809218106 container died 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f43ebdcb6f54e952583fea15b183c6a238b4a0820613ea2be71fb8a0f6d6c3-merged.mount: Deactivated successfully.
Nov 29 07:27:54 compute-0 podman[77505]: 2025-11-29 07:27:54.996526445 +0000 UTC m=+0.852903580 container remove 3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:27:55 compute-0 systemd[1]: libpod-conmon-3103d7e8f62d115b81b88a16bc5ccb505add7220372cd23156ea770a0531cd22.scope: Deactivated successfully.
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.054963953 +0000 UTC m=+0.041238402 container create b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:27:55 compute-0 sudo[77633]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:27:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:55 compute-0 systemd[1]: Started libpod-conmon-b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c.scope.
Nov 29 07:27:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:55 compute-0 sudo[77708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.034413649 +0000 UTC m=+0.020688098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7090f75f64f209ce3380bb24cd5e61850df831c75102f5924d60bad494ea619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7090f75f64f209ce3380bb24cd5e61850df831c75102f5924d60bad494ea619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7090f75f64f209ce3380bb24cd5e61850df831c75102f5924d60bad494ea619/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 sudo[77708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:55 compute-0 sudo[77708]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.147122246 +0000 UTC m=+0.133396705 container init b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.152848045 +0000 UTC m=+0.139122514 container start b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.156982982 +0000 UTC m=+0.143257411 container attach b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:55 compute-0 sudo[77737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:55 compute-0 sudo[77737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:55 compute-0 sudo[77737]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:55 compute-0 sudo[77763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:55 compute-0 sudo[77763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:55 compute-0 sudo[77763]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:55 compute-0 sudo[77788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:27:55 compute-0 sudo[77788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:55 compute-0 ceph-mon[75237]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:55 compute-0 ceph-mon[75237]: Saving service mon spec with placement count:5
Nov 29 07:27:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:55 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:55 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 07:27:55 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 07:27:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:27:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:55 compute-0 vigilant_antonelli[77714]: Scheduled crash update...
Nov 29 07:27:55 compute-0 systemd[1]: libpod-b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c.scope: Deactivated successfully.
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.769382715 +0000 UTC m=+0.755657134 container died b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7090f75f64f209ce3380bb24cd5e61850df831c75102f5924d60bad494ea619-merged.mount: Deactivated successfully.
Nov 29 07:27:55 compute-0 podman[77687]: 2025-11-29 07:27:55.816306814 +0000 UTC m=+0.802581243 container remove b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c (image=quay.io/ceph/ceph:v18, name=vigilant_antonelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:55 compute-0 systemd[1]: libpod-conmon-b7f73066239cf67651601a8026a7e1ce6f9f3f34ad9527b16eb6c519bda7198c.scope: Deactivated successfully.
Nov 29 07:27:55 compute-0 podman[77902]: 2025-11-29 07:27:55.876164279 +0000 UTC m=+0.036424118 container create 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:55 compute-0 systemd[1]: Started libpod-conmon-834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb.scope.
Nov 29 07:27:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:55 compute-0 podman[77902]: 2025-11-29 07:27:55.860620644 +0000 UTC m=+0.020880503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c71a91958e5c5e0e05caa2c58f4e36e7794d6621d967b617eb5772b8eb2e1c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c71a91958e5c5e0e05caa2c58f4e36e7794d6621d967b617eb5772b8eb2e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c71a91958e5c5e0e05caa2c58f4e36e7794d6621d967b617eb5772b8eb2e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:55 compute-0 podman[77936]: 2025-11-29 07:27:55.984337518 +0000 UTC m=+0.072662859 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:27:56 compute-0 podman[77902]: 2025-11-29 07:27:56.018211957 +0000 UTC m=+0.178471836 container init 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:27:56 compute-0 podman[77902]: 2025-11-29 07:27:56.024777628 +0000 UTC m=+0.185037467 container start 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:56 compute-0 podman[77902]: 2025-11-29 07:27:56.030543417 +0000 UTC m=+0.190803316 container attach 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:56 compute-0 podman[77936]: 2025-11-29 07:27:56.303509636 +0000 UTC m=+0.391834927 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:27:56 compute-0 sudo[77788]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:56 compute-0 ceph-mon[75237]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:56 compute-0 ceph-mon[75237]: Saving service mgr spec with placement count:2
Nov 29 07:27:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:27:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:56 compute-0 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:27:56 compute-0 sudo[78008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:56 compute-0 sudo[78008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:56 compute-0 sudo[78008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 07:27:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2351891006' entity='client.admin' 
Nov 29 07:27:56 compute-0 sudo[78033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:56 compute-0 sudo[78033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:56 compute-0 podman[77902]: 2025-11-29 07:27:56.620158789 +0000 UTC m=+0.780418668 container died 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:56 compute-0 systemd[1]: libpod-834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb.scope: Deactivated successfully.
Nov 29 07:27:56 compute-0 sudo[78033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2c71a91958e5c5e0e05caa2c58f4e36e7794d6621d967b617eb5772b8eb2e1c-merged.mount: Deactivated successfully.
Nov 29 07:27:56 compute-0 podman[77902]: 2025-11-29 07:27:56.672083217 +0000 UTC m=+0.832343076 container remove 834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:27:56 compute-0 systemd[1]: libpod-conmon-834bc88e3f5a0f4dd3fda767be2a702ca92cb6c33c35fde531f9210eb0d891cb.scope: Deactivated successfully.
Nov 29 07:27:56 compute-0 sudo[78061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:56 compute-0 sudo[78061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:56 compute-0 sudo[78061]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:56 compute-0 podman[78094]: 2025-11-29 07:27:56.743294337 +0000 UTC m=+0.048721757 container create 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:56 compute-0 systemd[1]: Started libpod-conmon-89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4.scope.
Nov 29 07:27:56 compute-0 sudo[78103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:27:56 compute-0 sudo[78103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbeac624342a69e5661726fca13ea470d767c20c414d25bdc0a2b3d298009ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbeac624342a69e5661726fca13ea470d767c20c414d25bdc0a2b3d298009ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbeac624342a69e5661726fca13ea470d767c20c414d25bdc0a2b3d298009ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:56 compute-0 podman[78094]: 2025-11-29 07:27:56.81353493 +0000 UTC m=+0.118962350 container init 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:56 compute-0 podman[78094]: 2025-11-29 07:27:56.723190474 +0000 UTC m=+0.028617924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:56 compute-0 podman[78094]: 2025-11-29 07:27:56.818598942 +0000 UTC m=+0.124026342 container start 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:27:56 compute-0 podman[78094]: 2025-11-29 07:27:56.822363789 +0000 UTC m=+0.127791189 container attach 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:56 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78154 (sysctl)
Nov 29 07:27:56 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 07:27:57 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 07:27:57 compute-0 sudo[78103]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:57 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 07:27:57 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:57 compute-0 sudo[78195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:57 compute-0 sudo[78195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:57 compute-0 sudo[78195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:57 compute-0 systemd[1]: libpod-89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4.scope: Deactivated successfully.
Nov 29 07:27:57 compute-0 podman[78094]: 2025-11-29 07:27:57.421679243 +0000 UTC m=+0.727106653 container died 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bbeac624342a69e5661726fca13ea470d767c20c414d25bdc0a2b3d298009ac-merged.mount: Deactivated successfully.
Nov 29 07:27:57 compute-0 podman[78094]: 2025-11-29 07:27:57.474840944 +0000 UTC m=+0.780268344 container remove 89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4 (image=quay.io/ceph/ceph:v18, name=dreamy_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:57 compute-0 ceph-mon[75237]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:57 compute-0 ceph-mon[75237]: Saving service crash spec with placement *
Nov 29 07:27:57 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2351891006' entity='client.admin' 
Nov 29 07:27:57 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:57 compute-0 systemd[1]: libpod-conmon-89af2c6fc3065a35c93ea68791c65673324b551a5eff05ba146928901d58fee4.scope: Deactivated successfully.
Nov 29 07:27:57 compute-0 sudo[78223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:57 compute-0 sudo[78223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:57 compute-0 sudo[78223]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:57 compute-0 podman[78258]: 2025-11-29 07:27:57.533705422 +0000 UTC m=+0.038209933 container create f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:27:57 compute-0 systemd[1]: Started libpod-conmon-f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae.scope.
Nov 29 07:27:57 compute-0 sudo[78271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:57 compute-0 sudo[78271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:57 compute-0 sudo[78271]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:57 compute-0 podman[78258]: 2025-11-29 07:27:57.51783156 +0000 UTC m=+0.022336091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a494622b01820e52ec3c9482a5dbbb8c371332ccbdf659ffa3f1f0d83eb161d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a494622b01820e52ec3c9482a5dbbb8c371332ccbdf659ffa3f1f0d83eb161d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a494622b01820e52ec3c9482a5dbbb8c371332ccbdf659ffa3f1f0d83eb161d6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:57 compute-0 podman[78258]: 2025-11-29 07:27:57.638999497 +0000 UTC m=+0.143504028 container init f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:27:57 compute-0 podman[78258]: 2025-11-29 07:27:57.64757343 +0000 UTC m=+0.152077951 container start f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:57 compute-0 podman[78258]: 2025-11-29 07:27:57.651374298 +0000 UTC m=+0.155878839 container attach f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:27:57 compute-0 sudo[78304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:27:57 compute-0 sudo[78304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:57 compute-0 sudo[78304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:27:57 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:57 compute-0 sudo[78350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:57 compute-0 sudo[78350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:57 compute-0 sudo[78350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:58 compute-0 sudo[78385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:58 compute-0 sudo[78385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:58 compute-0 sudo[78385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:58 compute-0 sudo[78419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:58 compute-0 sudo[78419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:58 compute-0 sudo[78419]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:58 compute-0 sudo[78444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:27:58 compute-0 sudo[78444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:58 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:27:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:58 compute-0 ceph-mgr[75527]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 07:27:58 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 07:27:58 compute-0 practical_hofstadter[78300]: Added label _admin to host compute-0
Nov 29 07:27:58 compute-0 systemd[1]: libpod-f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae.scope: Deactivated successfully.
Nov 29 07:27:58 compute-0 podman[78258]: 2025-11-29 07:27:58.252976861 +0000 UTC m=+0.757481382 container died f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a494622b01820e52ec3c9482a5dbbb8c371332ccbdf659ffa3f1f0d83eb161d6-merged.mount: Deactivated successfully.
Nov 29 07:27:58 compute-0 podman[78258]: 2025-11-29 07:27:58.306722166 +0000 UTC m=+0.811226677 container remove f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae (image=quay.io/ceph/ceph:v18, name=practical_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:27:58 compute-0 systemd[1]: libpod-conmon-f242bf595f049e47d683ea819bf30bcd85f6b9bc488e820e07e5b191873dc6ae.scope: Deactivated successfully.
Nov 29 07:27:58 compute-0 podman[78485]: 2025-11-29 07:27:58.373269695 +0000 UTC m=+0.044953928 container create 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:27:58 compute-0 systemd[1]: Started libpod-conmon-94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a.scope.
Nov 29 07:27:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e06695a3717f34aca2fa6beb6f8a8563c1aaa165760f9e38623c0d13a12792/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e06695a3717f34aca2fa6beb6f8a8563c1aaa165760f9e38623c0d13a12792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e06695a3717f34aca2fa6beb6f8a8563c1aaa165760f9e38623c0d13a12792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:58 compute-0 podman[78485]: 2025-11-29 07:27:58.350934885 +0000 UTC m=+0.022619138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:58 compute-0 podman[78485]: 2025-11-29 07:27:58.455021667 +0000 UTC m=+0.126705930 container init 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:27:58 compute-0 podman[78485]: 2025-11-29 07:27:58.466792114 +0000 UTC m=+0.138476347 container start 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:27:58 compute-0 podman[78485]: 2025-11-29 07:27:58.470656704 +0000 UTC m=+0.142340947 container attach 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:58 compute-0 ceph-mon[75237]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:27:58 compute-0 ceph-mgr[75527]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 07:27:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:27:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.619766306 +0000 UTC m=+0.048107110 container create b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:58 compute-0 systemd[1]: Started libpod-conmon-b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32.scope.
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.597469557 +0000 UTC m=+0.025810381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.717200987 +0000 UTC m=+0.145541811 container init b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.725558664 +0000 UTC m=+0.153899468 container start b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:58 compute-0 distracted_boyd[78558]: 167 167
Nov 29 07:27:58 compute-0 systemd[1]: libpod-b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32.scope: Deactivated successfully.
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.730533382 +0000 UTC m=+0.158874256 container attach b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.731043485 +0000 UTC m=+0.159384309 container died b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a564e60000eb2e07d6460def123b0c7390162209656c9d298c8d859c1b17903a-merged.mount: Deactivated successfully.
Nov 29 07:27:58 compute-0 podman[78542]: 2025-11-29 07:27:58.788688583 +0000 UTC m=+0.217029377 container remove b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:27:58 compute-0 systemd[1]: libpod-conmon-b2ce2e892cd1b12f8f58fb75cd945b59572432d372eab025e9e0728188b36f32.scope: Deactivated successfully.
Nov 29 07:27:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 07:27:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/773434375' entity='client.admin' 
Nov 29 07:27:59 compute-0 systemd[1]: libpod-94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a.scope: Deactivated successfully.
Nov 29 07:27:59 compute-0 podman[78485]: 2025-11-29 07:27:59.08817089 +0000 UTC m=+0.759855163 container died 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e06695a3717f34aca2fa6beb6f8a8563c1aaa165760f9e38623c0d13a12792-merged.mount: Deactivated successfully.
Nov 29 07:27:59 compute-0 podman[78485]: 2025-11-29 07:27:59.163114896 +0000 UTC m=+0.834799129 container remove 94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a (image=quay.io/ceph/ceph:v18, name=tender_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:59 compute-0 systemd[1]: libpod-conmon-94b4f2dbc9b2de3cd44cb638f053e0e7a3b91a56b31a547e71558aaefb11052a.scope: Deactivated successfully.
Nov 29 07:27:59 compute-0 podman[78610]: 2025-11-29 07:27:59.252729373 +0000 UTC m=+0.059142496 container create 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:59 compute-0 systemd[1]: Started libpod-conmon-6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094.scope.
Nov 29 07:27:59 compute-0 podman[78610]: 2025-11-29 07:27:59.230881515 +0000 UTC m=+0.037294688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:27:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c80804e3759d61e0638b222f60f1dde7fde2f5c73c04075a37e3507a9f9cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c80804e3759d61e0638b222f60f1dde7fde2f5c73c04075a37e3507a9f9cd4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c80804e3759d61e0638b222f60f1dde7fde2f5c73c04075a37e3507a9f9cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:59 compute-0 podman[78610]: 2025-11-29 07:27:59.350261816 +0000 UTC m=+0.156675019 container init 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:59 compute-0 podman[78610]: 2025-11-29 07:27:59.359928128 +0000 UTC m=+0.166341251 container start 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:27:59 compute-0 podman[78610]: 2025-11-29 07:27:59.364392003 +0000 UTC m=+0.170805206 container attach 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:59 compute-0 ceph-mon[75237]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:27:59 compute-0 ceph-mon[75237]: Added label _admin to host compute-0
Nov 29 07:27:59 compute-0 ceph-mon[75237]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:27:59 compute-0 ceph-mon[75237]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 07:27:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/773434375' entity='client.admin' 
Nov 29 07:27:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 07:28:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/837647413' entity='client.admin' 
Nov 29 07:28:00 compute-0 compassionate_euclid[78627]: set mgr/dashboard/cluster/status
Nov 29 07:28:00 compute-0 systemd[1]: libpod-6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094.scope: Deactivated successfully.
Nov 29 07:28:00 compute-0 podman[78610]: 2025-11-29 07:28:00.019004422 +0000 UTC m=+0.825417565 container died 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c80804e3759d61e0638b222f60f1dde7fde2f5c73c04075a37e3507a9f9cd4-merged.mount: Deactivated successfully.
Nov 29 07:28:00 compute-0 podman[78610]: 2025-11-29 07:28:00.074594286 +0000 UTC m=+0.881007409 container remove 6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094 (image=quay.io/ceph/ceph:v18, name=compassionate_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:00 compute-0 systemd[1]: libpod-conmon-6b4a492128af1a85706341571a608d41fed071b5b55afe0e37d139bf615f0094.scope: Deactivated successfully.
Nov 29 07:28:00 compute-0 sudo[74221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:00 compute-0 podman[78674]: 2025-11-29 07:28:00.329819024 +0000 UTC m=+0.066146089 container create d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:00 compute-0 systemd[1]: Started libpod-conmon-d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669.scope.
Nov 29 07:28:00 compute-0 podman[78674]: 2025-11-29 07:28:00.307008081 +0000 UTC m=+0.043335186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58127fdb308ff5d5c338b0b5769d80868e072df8dac4805b4849a917699c1d94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58127fdb308ff5d5c338b0b5769d80868e072df8dac4805b4849a917699c1d94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58127fdb308ff5d5c338b0b5769d80868e072df8dac4805b4849a917699c1d94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58127fdb308ff5d5c338b0b5769d80868e072df8dac4805b4849a917699c1d94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 podman[78674]: 2025-11-29 07:28:00.428892177 +0000 UTC m=+0.165219242 container init d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:00 compute-0 podman[78674]: 2025-11-29 07:28:00.44712881 +0000 UTC m=+0.183455875 container start d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:28:00 compute-0 podman[78674]: 2025-11-29 07:28:00.451015401 +0000 UTC m=+0.187342516 container attach d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:00 compute-0 sudo[78719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdqabwdspmncjsjtcrtnregsxojwdmh ; /usr/bin/python3'
Nov 29 07:28:00 compute-0 sudo[78719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:00 compute-0 python3[78721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:00 compute-0 podman[78722]: 2025-11-29 07:28:00.854241483 +0000 UTC m=+0.071518849 container create 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:28:00 compute-0 systemd[1]: Started libpod-conmon-79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266.scope.
Nov 29 07:28:00 compute-0 podman[78722]: 2025-11-29 07:28:00.833422642 +0000 UTC m=+0.050700048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ba72cdf996f9b3966ebd7532158592eee814c2ec6fd76878c4b0937bd8825/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ba72cdf996f9b3966ebd7532158592eee814c2ec6fd76878c4b0937bd8825/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:00 compute-0 podman[78722]: 2025-11-29 07:28:00.953279715 +0000 UTC m=+0.170557171 container init 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:00 compute-0 podman[78722]: 2025-11-29 07:28:00.965474151 +0000 UTC m=+0.182751527 container start 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:00 compute-0 podman[78722]: 2025-11-29 07:28:00.969441104 +0000 UTC m=+0.186718580 container attach 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:28:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/837647413' entity='client.admin' 
Nov 29 07:28:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 07:28:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/851212622' entity='client.admin' 
Nov 29 07:28:01 compute-0 systemd[1]: libpod-79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266.scope: Deactivated successfully.
Nov 29 07:28:01 compute-0 podman[78781]: 2025-11-29 07:28:01.58117154 +0000 UTC m=+0.030597856 container died 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-273ba72cdf996f9b3966ebd7532158592eee814c2ec6fd76878c4b0937bd8825-merged.mount: Deactivated successfully.
Nov 29 07:28:01 compute-0 podman[78781]: 2025-11-29 07:28:01.626034245 +0000 UTC m=+0.075460561 container remove 79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266 (image=quay.io/ceph/ceph:v18, name=practical_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:01 compute-0 systemd[1]: libpod-conmon-79d6bb50d423f42be1da3328a3de8db1c0a4b43aa3df507c7f2c5045c3902266.scope: Deactivated successfully.
Nov 29 07:28:01 compute-0 sudo[78719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]: [
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:     {
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "available": false,
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "ceph_device": false,
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "lsm_data": {},
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "lvs": [],
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "path": "/dev/sr0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "rejected_reasons": [
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "Has a FileSystem",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "Insufficient space (<5GB)"
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         ],
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         "sys_api": {
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "actuators": null,
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "device_nodes": "sr0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "devname": "sr0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "human_readable_size": "482.00 KB",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "id_bus": "ata",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "model": "QEMU DVD-ROM",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "nr_requests": "2",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "parent": "/dev/sr0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "partitions": {},
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "path": "/dev/sr0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "removable": "1",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "rev": "2.5+",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "ro": "0",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "rotational": "1",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "sas_address": "",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "sas_device_handle": "",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "scheduler_mode": "mq-deadline",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "sectors": 0,
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "sectorsize": "2048",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "size": 493568.0,
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "support_discard": "2048",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "type": "disk",
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:             "vendor": "QEMU"
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:         }
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]:     }
Nov 29 07:28:01 compute-0 hardcore_zhukovsky[78691]: ]
Nov 29 07:28:02 compute-0 ceph-mon[75237]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/851212622' entity='client.admin' 
Nov 29 07:28:02 compute-0 systemd[1]: libpod-d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669.scope: Deactivated successfully.
Nov 29 07:28:02 compute-0 systemd[1]: libpod-d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669.scope: Consumed 1.632s CPU time.
Nov 29 07:28:02 compute-0 podman[78674]: 2025-11-29 07:28:02.047048268 +0000 UTC m=+1.783375373 container died d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-58127fdb308ff5d5c338b0b5769d80868e072df8dac4805b4849a917699c1d94-merged.mount: Deactivated successfully.
Nov 29 07:28:02 compute-0 podman[78674]: 2025-11-29 07:28:02.123948656 +0000 UTC m=+1.860275721 container remove d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:28:02 compute-0 systemd[1]: libpod-conmon-d3276942e682d6de1aed28aced4ceb6a61c6e5dec57f2ba9018ed6645d8b6669.scope: Deactivated successfully.
Nov 29 07:28:02 compute-0 sudo[78444]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:28:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:02 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:28:02 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:28:02 compute-0 sudo[80608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 sudo[80608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:28:02 compute-0 sudo[80633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80633]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:02 compute-0 sudo[80658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 sudo[80658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80658]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph
Nov 29 07:28:02 compute-0 sudo[80689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:02 compute-0 sudo[80732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 sudo[80732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80732]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.conf.new
Nov 29 07:28:02 compute-0 sudo[80780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80780]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-froyklgtdkuiiujgfxitgsfhiapupldr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401282.0497868-36793-247771454934320/async_wrapper.py j599568390006 30 /home/zuul/.ansible/tmp/ansible-tmp-1764401282.0497868-36793-247771454934320/AnsiballZ_command.py _'
Nov 29 07:28:02 compute-0 sudo[80828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:02 compute-0 sudo[80833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 sudo[80833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:02 compute-0 sudo[80858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 sudo[80883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 ansible-async_wrapper.py[80832]: Invoked with j599568390006 30 /home/zuul/.ansible/tmp/ansible-tmp-1764401282.0497868-36793-247771454934320/AnsiballZ_command.py _
Nov 29 07:28:02 compute-0 sudo[80883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 ansible-async_wrapper.py[80910]: Starting module and watcher
Nov 29 07:28:02 compute-0 ansible-async_wrapper.py[80910]: Start watching 80911 (30)
Nov 29 07:28:02 compute-0 ansible-async_wrapper.py[80911]: Start module (80911)
Nov 29 07:28:02 compute-0 ansible-async_wrapper.py[80832]: Return async_wrapper task started.
Nov 29 07:28:02 compute-0 sudo[80828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.conf.new
Nov 29 07:28:02 compute-0 sudo[80912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:02 compute-0 sudo[80961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:02 compute-0 python3[80913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:02 compute-0 sudo[80961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:02 compute-0 sudo[80961]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[80987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.conf.new
Nov 29 07:28:03 compute-0 sudo[80987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.008310001 +0000 UTC m=+0.053345327 container create 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:28:03 compute-0 sudo[80987]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 systemd[1]: Started libpod-conmon-6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a.scope.
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:02.97707263 +0000 UTC m=+0.022107966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:03 compute-0 sudo[81024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7016399676e36bccac88f46dd4e3e51208f3f945ba01c6006096db680edf2b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7016399676e36bccac88f46dd4e3e51208f3f945ba01c6006096db680edf2b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:03 compute-0 sudo[81024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81024]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.102029715 +0000 UTC m=+0.147065031 container init 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.112768784 +0000 UTC m=+0.157804100 container start 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.116350267 +0000 UTC m=+0.161385613 container attach 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:28:03 compute-0 sudo[81054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.conf.new
Nov 29 07:28:03 compute-0 sudo[81054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81054]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:03 compute-0 ceph-mon[75237]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:28:03 compute-0 sudo[81080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 07:28:03 compute-0 sudo[81105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf
Nov 29 07:28:03 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf
Nov 29 07:28:03 compute-0 sudo[81130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81130]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config
Nov 29 07:28:03 compute-0 sudo[81155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81199]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config
Nov 29 07:28:03 compute-0 sudo[81224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:28:03 compute-0 objective_joliot[81047]: 
Nov 29 07:28:03 compute-0 objective_joliot[81047]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:28:03 compute-0 systemd[1]: libpod-6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a.scope: Deactivated successfully.
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.658289761 +0000 UTC m=+0.703325077 container died 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 07:28:03 compute-0 sudo[81274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf.new
Nov 29 07:28:03 compute-0 sudo[81274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d7016399676e36bccac88f46dd4e3e51208f3f945ba01c6006096db680edf2b-merged.mount: Deactivated successfully.
Nov 29 07:28:03 compute-0 sudo[81311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81311]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 podman[80986]: 2025-11-29 07:28:03.76223797 +0000 UTC m=+0.807273286 container remove 6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a (image=quay.io/ceph/ceph:v18, name=objective_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:03 compute-0 systemd[1]: libpod-conmon-6f67eeb7cd2a6a861b9a851c005761e1bfdae2ab7175c931856cced3dd9fa05a.scope: Deactivated successfully.
Nov 29 07:28:03 compute-0 ansible-async_wrapper.py[80911]: Module complete (80911)
Nov 29 07:28:03 compute-0 sudo[81341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:03 compute-0 sudo[81341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:03 compute-0 sudo[81366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:03 compute-0 sudo[81414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf.new
Nov 29 07:28:03 compute-0 sudo[81414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:03 compute-0 sudo[81414]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcwveyldzuddfsyebicmyclamairphs ; /usr/bin/python3'
Nov 29 07:28:04 compute-0 sudo[81510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:04 compute-0 ceph-mon[75237]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:04 compute-0 ceph-mon[75237]: Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf
Nov 29 07:28:04 compute-0 sudo[81511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf.new
Nov 29 07:28:04 compute-0 sudo[81511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 python3[81520]: ansible-ansible.legacy.async_status Invoked with jid=j599568390006.80832 mode=status _async_dir=/root/.ansible_async
Nov 29 07:28:04 compute-0 sudo[81510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf.new
Nov 29 07:28:04 compute-0 sudo[81563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81563]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81588]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf.new /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.conf
Nov 29 07:28:04 compute-0 sudo[81636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:28:04 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:28:04 compute-0 sudo[81684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whbdyyegwwssksilbuwglgzbrhhbkwtd ; /usr/bin/python3'
Nov 29 07:28:04 compute-0 sudo[81684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:04 compute-0 sudo[81686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81686]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:04 compute-0 sudo[81712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:28:04 compute-0 sudo[81712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81712]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 python3[81689]: ansible-ansible.legacy.async_status Invoked with jid=j599568390006.80832 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:28:04 compute-0 sudo[81684]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81737]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph
Nov 29 07:28:04 compute-0 sudo[81762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81762]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81787]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:28:04 compute-0 sudo[81812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81812]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:04 compute-0 sudo[81837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:04 compute-0 sudo[81863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:04 compute-0 sudo[81863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:04 compute-0 sudo[81863]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[81908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtyevejxyzrgejiupevgplesivigyqq ; /usr/bin/python3'
Nov 29 07:28:05 compute-0 sudo[81908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:05 compute-0 sudo[81912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[81912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[81912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[81938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:28:05 compute-0 sudo[81938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[81938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 python3[81913]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:28:05 compute-0 ceph-mon[75237]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:28:05 compute-0 ceph-mon[75237]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:28:05 compute-0 sudo[81908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[81988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[81988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[81988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:28:05 compute-0 sudo[82013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[82038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82038]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:28:05 compute-0 sudo[82063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwikvtmlulwvkotmbkcahtdiqjxynkoa ; /usr/bin/python3'
Nov 29 07:28:05 compute-0 sudo[82117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:05 compute-0 sudo[82106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[82106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 29 07:28:05 compute-0 sudo[82139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82139]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring
Nov 29 07:28:05 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring
Nov 29 07:28:05 compute-0 python3[82130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:05 compute-0 sudo[82164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[82164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 podman[82165]: 2025-11-29 07:28:05.773250643 +0000 UTC m=+0.042801272 container create e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:28:05 compute-0 sudo[82164]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 systemd[1]: Started libpod-conmon-e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37.scope.
Nov 29 07:28:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:05 compute-0 sudo[82202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config
Nov 29 07:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafbfc61ab291013814036c65021f7c0b703a645f8784e9ab2908b609ccb389f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:05 compute-0 sudo[82202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafbfc61ab291013814036c65021f7c0b703a645f8784e9ab2908b609ccb389f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafbfc61ab291013814036c65021f7c0b703a645f8784e9ab2908b609ccb389f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:05 compute-0 sudo[82202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 podman[82165]: 2025-11-29 07:28:05.846968138 +0000 UTC m=+0.116518787 container init e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:05 compute-0 podman[82165]: 2025-11-29 07:28:05.750907263 +0000 UTC m=+0.020457912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:05 compute-0 podman[82165]: 2025-11-29 07:28:05.857275735 +0000 UTC m=+0.126826354 container start e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:05 compute-0 podman[82165]: 2025-11-29 07:28:05.861169846 +0000 UTC m=+0.130720535 container attach e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:05 compute-0 sudo[82232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:05 compute-0 sudo[82232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:05 compute-0 sudo[82258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config
Nov 29 07:28:05 compute-0 sudo[82258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:05 compute-0 sudo[82258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82283]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring.new
Nov 29 07:28:06 compute-0 sudo[82308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:06 compute-0 sudo[82359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82359]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 ceph-mon[75237]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:06 compute-0 sudo[82402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82402]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring.new
Nov 29 07:28:06 compute-0 sudo[82427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82427]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:28:06 compute-0 strange_kepler[82222]: 
Nov 29 07:28:06 compute-0 strange_kepler[82222]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:28:06 compute-0 systemd[1]: libpod-e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37.scope: Deactivated successfully.
Nov 29 07:28:06 compute-0 podman[82165]: 2025-11-29 07:28:06.453224682 +0000 UTC m=+0.722775331 container died e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:28:06 compute-0 sudo[82475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82475]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eafbfc61ab291013814036c65021f7c0b703a645f8784e9ab2908b609ccb389f-merged.mount: Deactivated successfully.
Nov 29 07:28:06 compute-0 podman[82165]: 2025-11-29 07:28:06.519823541 +0000 UTC m=+0.789374180 container remove e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37 (image=quay.io/ceph/ceph:v18, name=strange_kepler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:28:06 compute-0 systemd[1]: libpod-conmon-e08f793bb26b1b735dd9e6ce872dd54f246b8f161c38e950e05d38a6fb74fa37.scope: Deactivated successfully.
Nov 29 07:28:06 compute-0 sudo[82503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring.new
Nov 29 07:28:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:06 compute-0 sudo[82503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring.new
Nov 29 07:28:06 compute-0 sudo[82565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82565]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 sudo[82615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-321e9cb7-01a2-5759-bf8c-981c9a64aa3e/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring.new /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring
Nov 29 07:28:06 compute-0 sudo[82615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:28:06 compute-0 sudo[82663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukznuavihmoebqjczrysbkapcrapfusr ; /usr/bin/python3'
Nov 29 07:28:06 compute-0 sudo[82663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:06 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev e4e7d23f-4ee9-4da0-8cb3-d0b7449b3abe (Updating crash deployment (+1 -> 1))
Nov 29 07:28:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 07:28:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:06 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 07:28:06 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 07:28:06 compute-0 sudo[82666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:06 compute-0 sudo[82666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82666]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:06 compute-0 python3[82665]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:06 compute-0 sudo[82691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:06 compute-0 sudo[82691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:06 compute-0 sudo[82691]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.047053593 +0000 UTC m=+0.044495447 container create 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:28:07 compute-0 sudo[82722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:07 compute-0 systemd[1]: Started libpod-conmon-599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6.scope.
Nov 29 07:28:07 compute-0 sudo[82722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:07 compute-0 sudo[82722]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ecc49ccdefc4617a3844228b18ca9f3b1d1a12e6a99e4321af6e448c7a34c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ecc49ccdefc4617a3844228b18ca9f3b1d1a12e6a99e4321af6e448c7a34c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12ecc49ccdefc4617a3844228b18ca9f3b1d1a12e6a99e4321af6e448c7a34c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.028435879 +0000 UTC m=+0.025877753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.129196036 +0000 UTC m=+0.126637910 container init 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.135738555 +0000 UTC m=+0.133180409 container start 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.142473 +0000 UTC m=+0.139914874 container attach 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:28:07 compute-0 sudo[82759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:07 compute-0 sudo[82759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:07 compute-0 ceph-mon[75237]: Updating compute-0:/var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/config/ceph.client.admin.keyring
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 07:28:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.537213301 +0000 UTC m=+0.057499424 container create 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:07 compute-0 systemd[1]: Started libpod-conmon-0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c.scope.
Nov 29 07:28:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.517369996 +0000 UTC m=+0.037656079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.627399103 +0000 UTC m=+0.147685276 container init 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.637227648 +0000 UTC m=+0.157513781 container start 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.641625943 +0000 UTC m=+0.161912046 container attach 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:28:07 compute-0 mystifying_germain[82859]: 167 167
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.644896188 +0000 UTC m=+0.165182311 container died 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:28:07 compute-0 systemd[1]: libpod-0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c.scope: Deactivated successfully.
Nov 29 07:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b6c47754ac797391919996889c24ebb6281370a0bdeb4a0e45187226e0b1f0-merged.mount: Deactivated successfully.
Nov 29 07:28:07 compute-0 podman[82824]: 2025-11-29 07:28:07.695261906 +0000 UTC m=+0.215548029 container remove 0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:28:07 compute-0 systemd[1]: libpod-conmon-0289b381f2e69602715872e21777bcd0cb981987dc8d80d4ed73715cd087918c.scope: Deactivated successfully.
Nov 29 07:28:07 compute-0 systemd[1]: Reloading.
Nov 29 07:28:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 07:28:07 compute-0 ansible-async_wrapper.py[80910]: Done in kid B.
Nov 29 07:28:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1266825927' entity='client.admin' 
Nov 29 07:28:07 compute-0 podman[82715]: 2025-11-29 07:28:07.830852407 +0000 UTC m=+0.828294331 container died 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:07 compute-0 systemd-rc-local-generator[82909]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:07 compute-0 systemd-sysv-generator[82915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:08 compute-0 systemd[1]: libpod-599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6.scope: Deactivated successfully.
Nov 29 07:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12ecc49ccdefc4617a3844228b18ca9f3b1d1a12e6a99e4321af6e448c7a34c-merged.mount: Deactivated successfully.
Nov 29 07:28:08 compute-0 podman[82715]: 2025-11-29 07:28:08.05701939 +0000 UTC m=+1.054461254 container remove 599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6 (image=quay.io/ceph/ceph:v18, name=jovial_pike, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:08 compute-0 systemd[1]: libpod-conmon-599d7a1f604dfb526a3b1e7143c6854395acb75a6358538aa87b8f1a79fb33f6.scope: Deactivated successfully.
Nov 29 07:28:08 compute-0 systemd[1]: Reloading.
Nov 29 07:28:08 compute-0 sudo[82663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:08 compute-0 systemd-sysv-generator[82956]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:08 compute-0 systemd-rc-local-generator[82948]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:08 compute-0 ceph-mon[75237]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:08 compute-0 ceph-mon[75237]: Deploying daemon crash.compute-0 on compute-0
Nov 29 07:28:08 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1266825927' entity='client.admin' 
Nov 29 07:28:08 compute-0 sudo[82989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vomyasyaebzqxoclkhtsgvrbxcmqnyzl ; /usr/bin/python3'
Nov 29 07:28:08 compute-0 sudo[82989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:08 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:28:08 compute-0 python3[82992]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:08 compute-0 podman[83020]: 2025-11-29 07:28:08.556234294 +0000 UTC m=+0.052174465 container create 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:08 compute-0 systemd[1]: Started libpod-conmon-8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2.scope.
Nov 29 07:28:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808be1f5f4168072d5783f28319f2b1f94d4912c3934b4349a2bb40c8be0602a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808be1f5f4168072d5783f28319f2b1f94d4912c3934b4349a2bb40c8be0602a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808be1f5f4168072d5783f28319f2b1f94d4912c3934b4349a2bb40c8be0602a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 podman[83020]: 2025-11-29 07:28:08.537785695 +0000 UTC m=+0.033725776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:08 compute-0 podman[83020]: 2025-11-29 07:28:08.643120471 +0000 UTC m=+0.139060562 container init 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:08 compute-0 podman[83055]: 2025-11-29 07:28:08.648215792 +0000 UTC m=+0.043788058 container create 0102e732daa3769766e69b6e8b821e919b8c2dc3abaf24bc69d0e85f9480227e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:08 compute-0 podman[83020]: 2025-11-29 07:28:08.655127052 +0000 UTC m=+0.151067113 container start 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:08 compute-0 podman[83020]: 2025-11-29 07:28:08.663627382 +0000 UTC m=+0.159567463 container attach 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03e6e1f5860c3fb3bf9ce9c5251a85b25787c7ffdb78cf3beabedbf726a72b49/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03e6e1f5860c3fb3bf9ce9c5251a85b25787c7ffdb78cf3beabedbf726a72b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03e6e1f5860c3fb3bf9ce9c5251a85b25787c7ffdb78cf3beabedbf726a72b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03e6e1f5860c3fb3bf9ce9c5251a85b25787c7ffdb78cf3beabedbf726a72b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:08 compute-0 podman[83055]: 2025-11-29 07:28:08.630480102 +0000 UTC m=+0.026052378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:08 compute-0 podman[83055]: 2025-11-29 07:28:08.735370956 +0000 UTC m=+0.130943232 container init 0102e732daa3769766e69b6e8b821e919b8c2dc3abaf24bc69d0e85f9480227e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:08 compute-0 podman[83055]: 2025-11-29 07:28:08.744864492 +0000 UTC m=+0.140436748 container start 0102e732daa3769766e69b6e8b821e919b8c2dc3abaf24bc69d0e85f9480227e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:28:08 compute-0 bash[83055]: 0102e732daa3769766e69b6e8b821e919b8c2dc3abaf24bc69d0e85f9480227e
Nov 29 07:28:08 compute-0 systemd[1]: Started Ceph crash.compute-0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:28:08 compute-0 sudo[82759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev e4e7d23f-4ee9-4da0-8cb3-d0b7449b3abe (Updating crash deployment (+1 -> 1))
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event e4e7d23f-4ee9-4da0-8cb3-d0b7449b3abe (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 21947209-a839-42e1-8452-f00b49a2ef61 does not exist
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev ec93cb8b-2a46-4d96-8527-2a90ead658b3 (Updating mgr deployment (+1 -> 2))
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdxzyp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdxzyp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdxzyp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:28:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.pdxzyp on compute-0
Nov 29 07:28:08 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.pdxzyp on compute-0
Nov 29 07:28:08 compute-0 sudo[83080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:08 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 07:28:08 compute-0 sudo[83080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:08 compute-0 sudo[83080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-0 sudo[83109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:09 compute-0 sudo[83109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:09 compute-0 sudo[83109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-0 sudo[83151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:09 compute-0 sudo[83151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:09 compute-0 sudo[83151]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-0 sudo[83176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:09 compute-0 sudo[83176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.201+0000 7fb215734640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.201+0000 7fb215734640 -1 AuthRegistry(0x7fb210067cf0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.203+0000 7fb215734640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.203+0000 7fb215734640 -1 AuthRegistry(0x7fb215733000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.204+0000 7fb20effd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: 2025-11-29T07:28:09.204+0000 7fb215734640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 07:28:09 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-crash-compute-0[83075]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 07:28:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 07:28:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1263553250' entity='client.admin' 
Nov 29 07:28:09 compute-0 systemd[1]: libpod-8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2.scope: Deactivated successfully.
Nov 29 07:28:09 compute-0 podman[83020]: 2025-11-29 07:28:09.399657116 +0000 UTC m=+0.895597177 container died 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-808be1f5f4168072d5783f28319f2b1f94d4912c3934b4349a2bb40c8be0602a-merged.mount: Deactivated successfully.
Nov 29 07:28:09 compute-0 podman[83020]: 2025-11-29 07:28:09.731998257 +0000 UTC m=+1.227938358 container remove 8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2 (image=quay.io/ceph/ceph:v18, name=objective_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:09 compute-0 sudo[82989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:09 compute-0 systemd[1]: libpod-conmon-8e4406e17c78e2f5a13dce8427721db6f042607dae2066a90388baacf97e48a2.scope: Deactivated successfully.
Nov 29 07:28:09 compute-0 ceph-mon[75237]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdxzyp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdxzyp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:09 compute-0 ceph-mon[75237]: Deploying daemon mgr.compute-0.pdxzyp on compute-0
Nov 29 07:28:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1263553250' entity='client.admin' 
Nov 29 07:28:09 compute-0 podman[83266]: 2025-11-29 07:28:09.87960531 +0000 UTC m=+0.047336920 container create 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:09 compute-0 podman[83266]: 2025-11-29 07:28:09.857158167 +0000 UTC m=+0.024889787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:09 compute-0 systemd[1]: Started libpod-conmon-9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464.scope.
Nov 29 07:28:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:10 compute-0 podman[83266]: 2025-11-29 07:28:10.012325537 +0000 UTC m=+0.180057167 container init 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:10 compute-0 podman[83266]: 2025-11-29 07:28:10.020812797 +0000 UTC m=+0.188544407 container start 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:10 compute-0 podman[83266]: 2025-11-29 07:28:10.024248386 +0000 UTC m=+0.191979996 container attach 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:10 compute-0 heuristic_banach[83285]: 167 167
Nov 29 07:28:10 compute-0 systemd[1]: libpod-9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464.scope: Deactivated successfully.
Nov 29 07:28:10 compute-0 podman[83266]: 2025-11-29 07:28:10.028300691 +0000 UTC m=+0.196032321 container died 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:10 compute-0 sudo[83310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftfyontasutxbvklcjznclfcvonmitg ; /usr/bin/python3'
Nov 29 07:28:10 compute-0 sudo[83310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdef2ee1fe90f9c49bbb016045202117fef4852737ac44cd49aba50d60443b49-merged.mount: Deactivated successfully.
Nov 29 07:28:10 compute-0 podman[83266]: 2025-11-29 07:28:10.070676592 +0000 UTC m=+0.238408202 container remove 9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:10 compute-0 systemd[1]: libpod-conmon-9c63d338520859d5b77559ef2d03a07f380c58e1346524648ed312636c953464.scope: Deactivated successfully.
Nov 29 07:28:10 compute-0 systemd[1]: Reloading.
Nov 29 07:28:10 compute-0 python3[83315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:10 compute-0 systemd-rc-local-generator[83352]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:10 compute-0 systemd-sysv-generator[83362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:10 compute-0 podman[83357]: 2025-11-29 07:28:10.275567433 +0000 UTC m=+0.049443196 container create 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:10 compute-0 podman[83357]: 2025-11-29 07:28:10.256244611 +0000 UTC m=+0.030120404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:10 compute-0 systemd[1]: Started libpod-conmon-2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de.scope.
Nov 29 07:28:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679f9ef20890ae5a995e1caed44f50095a0f37bd18bd0f80c48d70242b8fd6fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679f9ef20890ae5a995e1caed44f50095a0f37bd18bd0f80c48d70242b8fd6fd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679f9ef20890ae5a995e1caed44f50095a0f37bd18bd0f80c48d70242b8fd6fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:10 compute-0 podman[83357]: 2025-11-29 07:28:10.451786259 +0000 UTC m=+0.225662042 container init 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:28:10 compute-0 systemd[1]: Reloading.
Nov 29 07:28:10 compute-0 podman[83357]: 2025-11-29 07:28:10.460883685 +0000 UTC m=+0.234759448 container start 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:10 compute-0 podman[83357]: 2025-11-29 07:28:10.464325005 +0000 UTC m=+0.238200788 container attach 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:28:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:10 compute-0 systemd-rc-local-generator[83414]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:10 compute-0 systemd-sysv-generator[83418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:10 compute-0 systemd[1]: Starting Ceph mgr.compute-0.pdxzyp for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:28:11 compute-0 podman[83491]: 2025-11-29 07:28:10.992234684 +0000 UTC m=+0.023265325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 07:28:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1273584254' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 07:28:11 compute-0 podman[83491]: 2025-11-29 07:28:11.995271631 +0000 UTC m=+1.026302222 container create 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1273584254' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 07:28:12 compute-0 hardcore_germain[83382]: set require_min_compat_client to mimic
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 07:28:12 compute-0 ceph-mon[75237]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1273584254' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 07:28:12 compute-0 systemd[1]: libpod-2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de.scope: Deactivated successfully.
Nov 29 07:28:12 compute-0 podman[83357]: 2025-11-29 07:28:12.073168685 +0000 UTC m=+1.847044448 container died 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf2f0aea2943c86480fc4c8357c34942e87927638890c802d51a58e308d3789/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf2f0aea2943c86480fc4c8357c34942e87927638890c802d51a58e308d3789/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf2f0aea2943c86480fc4c8357c34942e87927638890c802d51a58e308d3789/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf2f0aea2943c86480fc4c8357c34942e87927638890c802d51a58e308d3789/merged/var/lib/ceph/mgr/ceph-compute-0.pdxzyp supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:12 compute-0 podman[83491]: 2025-11-29 07:28:12.133051439 +0000 UTC m=+1.164082050 container init 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:12 compute-0 podman[83491]: 2025-11-29 07:28:12.139357903 +0000 UTC m=+1.170388494 container start 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-679f9ef20890ae5a995e1caed44f50095a0f37bd18bd0f80c48d70242b8fd6fd-merged.mount: Deactivated successfully.
Nov 29 07:28:12 compute-0 bash[83491]: 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768
Nov 29 07:28:12 compute-0 systemd[1]: Started Ceph mgr.compute-0.pdxzyp for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: pidfile_write: ignore empty --pid-file
Nov 29 07:28:12 compute-0 sudo[83176]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 podman[83357]: 2025-11-29 07:28:12.214783572 +0000 UTC m=+1.988659335 container remove 2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de (image=quay.io/ceph/ceph:v18, name=hardcore_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:12 compute-0 systemd[1]: libpod-conmon-2871dbd334ebf51c6b129d3bda7adcfcfac48cf74f459a70a88a762750d319de.scope: Deactivated successfully.
Nov 29 07:28:12 compute-0 sudo[83310]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:12 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev ec93cb8b-2a46-4d96-8527-2a90ead658b3 (Updating mgr deployment (+1 -> 2))
Nov 29 07:28:12 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event ec93cb8b-2a46-4d96-8527-2a90ead658b3 (Updating mgr deployment (+1 -> 2)) in 3 seconds
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:28:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'alerts'
Nov 29 07:28:12 compute-0 sudo[83550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:12 compute-0 sudo[83550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83550]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:12 compute-0 sudo[83575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:28:12 compute-0 sudo[83575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 sudo[83600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:12 compute-0 sudo[83600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:12 compute-0 sudo[83625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:12 compute-0 sudo[83625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83625]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'balancer'
Nov 29 07:28:12 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:12.592+0000 7f82f1579140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:28:12 compute-0 sudo[83650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:12 compute-0 sudo[83650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83650]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:12 compute-0 sudo[83675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:28:12 compute-0 sudo[83675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:12 compute-0 sudo[83723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntxeuwnwermjpjpqlnmfynievkzgsque ; /usr/bin/python3'
Nov 29 07:28:12 compute-0 sudo[83723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:28:12 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:12.843+0000 7f82f1579140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:28:12 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'cephadm'
Nov 29 07:28:12 compute-0 python3[83725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:13 compute-0 podman[83753]: 2025-11-29 07:28:12.942039177 +0000 UTC m=+0.021412846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:13 compute-0 podman[83753]: 2025-11-29 07:28:13.120146222 +0000 UTC m=+0.199519871 container create 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:28:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1273584254' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 07:28:13 compute-0 ceph-mon[75237]: osdmap e3: 0 total, 0 up, 0 in
Nov 29 07:28:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:13 compute-0 systemd[1]: Started libpod-conmon-42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8.scope.
Nov 29 07:28:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d989669ee0210df5908369faab5326a4d6cae1cccf52a825075b5ecca7212a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d989669ee0210df5908369faab5326a4d6cae1cccf52a825075b5ecca7212a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d989669ee0210df5908369faab5326a4d6cae1cccf52a825075b5ecca7212a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:13 compute-0 podman[83753]: 2025-11-29 07:28:13.347955699 +0000 UTC m=+0.427329438 container init 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:28:13 compute-0 podman[83753]: 2025-11-29 07:28:13.357967479 +0000 UTC m=+0.437341138 container start 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:28:13 compute-0 podman[83753]: 2025-11-29 07:28:13.38342989 +0000 UTC m=+0.462803579 container attach 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:28:13 compute-0 podman[83815]: 2025-11-29 07:28:13.526250969 +0000 UTC m=+0.082325399 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:13 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 2 completed events
Nov 29 07:28:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:28:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:13 compute-0 podman[83815]: 2025-11-29 07:28:13.649507319 +0000 UTC m=+0.205581729 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:28:13 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:28:13 compute-0 sudo[83675]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:13 compute-0 sudo[83919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:13 compute-0 sudo[83919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 sudo[83919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 sudo[83944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:14 compute-0 sudo[83944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 sudo[83944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 sudo[83969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:14 compute-0 sudo[83969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 sudo[83969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:14 compute-0 sudo[83994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:28:14 compute-0 sudo[83994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev bc998420-69b3-4534-8a9d-3fb1b7e911b4 does not exist
Nov 29 07:28:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b5df79a4-5fdc-4a8e-8c90-7f1e989ea301 does not exist
Nov 29 07:28:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c13f9f64-8b72-458d-b23f-83b85b2aca85 does not exist
Nov 29 07:28:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:14 compute-0 sudo[83994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:28:14 compute-0 sudo[84048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:14 compute-0 sudo[84048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 sudo[84048]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 sudo[84075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:28:14 compute-0 sudo[84075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:14 compute-0 sudo[84075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 07:28:14 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'crash'
Nov 29 07:28:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mgr[83525]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:28:15 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'dashboard'
Nov 29 07:28:15 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:15.252+0000 7f82f1579140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO root] Added host compute-0
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:28:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 07:28:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:15 compute-0 admiring_chaplygin[83797]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 07:28:15 compute-0 admiring_chaplygin[83797]: Scheduled mon update...
Nov 29 07:28:15 compute-0 admiring_chaplygin[83797]: Scheduled mgr update...
Nov 29 07:28:15 compute-0 admiring_chaplygin[83797]: Scheduled osd.default_drive_group update...
Nov 29 07:28:15 compute-0 systemd[1]: libpod-42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8.scope: Deactivated successfully.
Nov 29 07:28:15 compute-0 podman[83753]: 2025-11-29 07:28:15.622374592 +0000 UTC m=+2.701748251 container died 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d989669ee0210df5908369faab5326a4d6cae1cccf52a825075b5ecca7212a2-merged.mount: Deactivated successfully.
Nov 29 07:28:15 compute-0 sudo[84100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:15 compute-0 sudo[84100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:15 compute-0 sudo[84100]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:15 compute-0 podman[83753]: 2025-11-29 07:28:15.687453593 +0000 UTC m=+2.766827242 container remove 42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8 (image=quay.io/ceph/ceph:v18, name=admiring_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:15 compute-0 systemd[1]: libpod-conmon-42b6474b3127e9a7b6852b524ec292439026797d62803a109f0acb0179d560e8.scope: Deactivated successfully.
Nov 29 07:28:15 compute-0 sudo[83723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:15 compute-0 sudo[84138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:15 compute-0 sudo[84138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:15 compute-0 sudo[84138]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:15 compute-0 sudo[84163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:15 compute-0 sudo[84163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:15 compute-0 sudo[84163]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:15 compute-0 sudo[84188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:15 compute-0 sudo[84188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:16 compute-0 sudo[84236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukmxgpqhqiojewotaqxnrcxayvnuquas ; /usr/bin/python3'
Nov 29 07:28:16 compute-0 sudo[84236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.172737965 +0000 UTC m=+0.065425000 container create 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:16 compute-0 python3[84245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:16 compute-0 systemd[1]: Started libpod-conmon-539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9.scope.
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.150288882 +0000 UTC m=+0.042975927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.27038219 +0000 UTC m=+0.163069235 container init 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:28:16 compute-0 podman[84269]: 2025-11-29 07:28:16.279736384 +0000 UTC m=+0.065218536 container create 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.282903505 +0000 UTC m=+0.175590530 container start 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:16 compute-0 elegant_payne[84278]: 167 167
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.288757377 +0000 UTC m=+0.181444402 container attach 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.289350883 +0000 UTC m=+0.182037928 container died 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:28:16 compute-0 systemd[1]: libpod-539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9.scope: Deactivated successfully.
Nov 29 07:28:16 compute-0 systemd[1]: Started libpod-conmon-0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc.scope.
Nov 29 07:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bffe1c733219beff5ebb311954d4f50a7076d368aecc25222275f3930171c909-merged.mount: Deactivated successfully.
Nov 29 07:28:16 compute-0 podman[84256]: 2025-11-29 07:28:16.348042048 +0000 UTC m=+0.240729073 container remove 539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:28:16 compute-0 podman[84269]: 2025-11-29 07:28:16.254530239 +0000 UTC m=+0.040012431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:16 compute-0 systemd[1]: libpod-conmon-539e2d0360b90d74675f0c40d419d45d97a6c2f1d52953bd806bfc23c021d5f9.scope: Deactivated successfully.
Nov 29 07:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073d67c462bd0410d45c7c96d55755ef2089fa8459a837a396fdc6cc8795026d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073d67c462bd0410d45c7c96d55755ef2089fa8459a837a396fdc6cc8795026d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073d67c462bd0410d45c7c96d55755ef2089fa8459a837a396fdc6cc8795026d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:16 compute-0 podman[84269]: 2025-11-29 07:28:16.385278794 +0000 UTC m=+0.170761006 container init 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:28:16 compute-0 sudo[84188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:16 compute-0 podman[84269]: 2025-11-29 07:28:16.396497935 +0000 UTC m=+0.181980097 container start 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:28:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:16 compute-0 podman[84269]: 2025-11-29 07:28:16.401299091 +0000 UTC m=+0.186781243 container attach 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 07:28:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fwfehy (unknown last config time)...
Nov 29 07:28:16 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fwfehy (unknown last config time)...
Nov 29 07:28:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fwfehy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 07:28:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fwfehy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 07:28:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fwfehy on compute-0
Nov 29 07:28:16 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fwfehy on compute-0
Nov 29 07:28:16 compute-0 sudo[84309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:16 compute-0 sudo[84309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:16 compute-0 sudo[84309]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Added host compute-0
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Saving service mon spec with placement compute-0
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Saving service mgr spec with placement compute-0
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Reconfiguring mgr.compute-0.fwfehy (unknown last config time)...
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fwfehy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:16 compute-0 ceph-mon[75237]: Reconfiguring daemon mgr.compute-0.fwfehy on compute-0
Nov 29 07:28:16 compute-0 sudo[84334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:16 compute-0 sudo[84334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:16 compute-0 sudo[84334]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:16 compute-0 sudo[84359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:16 compute-0 sudo[84359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:16 compute-0 sudo[84359]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:16 compute-0 sudo[84384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:16 compute-0 sudo[84384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:16 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:28:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:28:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496303120' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:28:17 compute-0 awesome_raman[84303]: 
Nov 29 07:28:17 compute-0 awesome_raman[84303]: {"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":90,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T07:26:43.858355+0000","services":{}},"progress_events":{}}
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.081350771 +0000 UTC m=+0.083035258 container create 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:17.081+0000 7f82f1579140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:28:17 compute-0 systemd[1]: libpod-0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc.scope: Deactivated successfully.
Nov 29 07:28:17 compute-0 podman[84269]: 2025-11-29 07:28:17.096865754 +0000 UTC m=+0.882347936 container died 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:28:17 compute-0 systemd[1]: Started libpod-conmon-627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b.scope.
Nov 29 07:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-073d67c462bd0410d45c7c96d55755ef2089fa8459a837a396fdc6cc8795026d-merged.mount: Deactivated successfully.
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.046587558 +0000 UTC m=+0.048272125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:17 compute-0 podman[84269]: 2025-11-29 07:28:17.155600299 +0000 UTC m=+0.941082471 container remove 0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc (image=quay.io/ceph/ceph:v18, name=awesome_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:17 compute-0 systemd[1]: libpod-conmon-0a97b7b2e5d2a2943179460dfbd54ebd412e5d4766e2b3885dbb47f45dee4edc.scope: Deactivated successfully.
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.179958131 +0000 UTC m=+0.181642658 container init 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:17 compute-0 sudo[84236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.191287345 +0000 UTC m=+0.192971832 container start 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.194956421 +0000 UTC m=+0.196640918 container attach 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:28:17 compute-0 sad_bhaskara[84476]: 167 167
Nov 29 07:28:17 compute-0 systemd[1]: libpod-627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b.scope: Deactivated successfully.
Nov 29 07:28:17 compute-0 conmon[84476]: conmon 627aaa93c76d537ad6b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b.scope/container/memory.events
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.198035071 +0000 UTC m=+0.199719558 container died 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a284057ba2003a6888d1c15aa45896979a65956b30eb394910bb12888daf6b5a-merged.mount: Deactivated successfully.
Nov 29 07:28:17 compute-0 podman[84445]: 2025-11-29 07:28:17.243620684 +0000 UTC m=+0.245305171 container remove 627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:28:17 compute-0 systemd[1]: libpod-conmon-627aaa93c76d537ad6b15ba816c145ade81d52a8459e0f62169ac6779e0d4f9b.scope: Deactivated successfully.
Nov 29 07:28:17 compute-0 sudo[84384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:17 compute-0 sudo[84496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:17 compute-0 sudo[84496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:17 compute-0 sudo[84496]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:17 compute-0 sudo[84521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:17 compute-0 sudo[84521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:17 compute-0 sudo[84521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-0 sudo[84546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:17 compute-0 sudo[84546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:17 compute-0 sudo[84546]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:17 compute-0 ceph-mon[75237]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3496303120' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:28:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:17 compute-0 sudo[84571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:28:17 compute-0 sudo[84571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]:   from numpy import show_config as show_numpy_config
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'influx'
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:17.645+0000 7f82f1579140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:28:17 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'insights'
Nov 29 07:28:17 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:17.904+0000 7f82f1579140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:28:18 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'iostat'
Nov 29 07:28:18 compute-0 podman[84667]: 2025-11-29 07:28:18.252460573 +0000 UTC m=+0.091914138 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:28:18 compute-0 podman[84667]: 2025-11-29 07:28:18.346248928 +0000 UTC m=+0.185702493 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:28:18 compute-0 ceph-mgr[83525]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:28:18 compute-0 ceph-mgr[83525]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:28:18 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp[83509]: 2025-11-29T07:28:18.399+0000 7f82f1579140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:28:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:18 compute-0 sudo[84571]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 86fb82ad-1742-402b-a807-af7a9e7fd95b does not exist
Nov 29 07:28:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:28:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:18 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev 56b6d9b6-edd3-46fc-b83e-9c7047f26f2e (Updating mgr deployment (-1 -> 1))
Nov 29 07:28:18 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.pdxzyp from compute-0 -- ports [8765]
Nov 29 07:28:18 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.pdxzyp from compute-0 -- ports [8765]
Nov 29 07:28:18 compute-0 sudo[84751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:18 compute-0 sudo[84751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:18 compute-0 sudo[84751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:18 compute-0 sudo[84776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:18 compute-0 sudo[84776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:18 compute-0 sudo[84776]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:18 compute-0 sudo[84801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:18 compute-0 sudo[84801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:18 compute-0 sudo[84801]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:18 compute-0 sudo[84826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --name mgr.compute-0.pdxzyp --force --tcp-ports 8765
Nov 29 07:28:18 compute-0 sudo[84826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:19 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.pdxzyp for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:28:19 compute-0 podman[84921]: 2025-11-29 07:28:19.482806824 +0000 UTC m=+0.065669867 container died 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf2f0aea2943c86480fc4c8357c34942e87927638890c802d51a58e308d3789-merged.mount: Deactivated successfully.
Nov 29 07:28:19 compute-0 podman[84921]: 2025-11-29 07:28:19.529502556 +0000 UTC m=+0.112365599 container remove 3aa448d901daf2fb21247445f6366d7bc7d7e516517eac9d16a90f742107c768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:28:19 compute-0 bash[84921]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-pdxzyp
Nov 29 07:28:19 compute-0 systemd[1]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mgr.compute-0.pdxzyp.service: Main process exited, code=exited, status=143/n/a
Nov 29 07:28:19 compute-0 ceph-mon[75237]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:19 compute-0 ceph-mon[75237]: Removing daemon mgr.compute-0.pdxzyp from compute-0 -- ports [8765]
Nov 29 07:28:19 compute-0 systemd[1]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mgr.compute-0.pdxzyp.service: Failed with result 'exit-code'.
Nov 29 07:28:19 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.pdxzyp for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:28:19 compute-0 systemd[1]: ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mgr.compute-0.pdxzyp.service: Consumed 8.221s CPU time.
Nov 29 07:28:19 compute-0 systemd[1]: Reloading.
Nov 29 07:28:19 compute-0 systemd-rc-local-generator[85004]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:19 compute-0 systemd-sysv-generator[85009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:20 compute-0 sudo[84826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.pdxzyp
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.pdxzyp"} v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.pdxzyp"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.pdxzyp
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pdxzyp"}]': finished
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev 56b6d9b6-edd3-46fc-b83e-9c7047f26f2e (Updating mgr deployment (-1 -> 1))
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event 56b6d9b6-edd3-46fc-b83e-9c7047f26f2e (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 90da80b1-3a67-4f14-a2b7-d020f476dbe6 does not exist
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:20 compute-0 sudo[85016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:20 compute-0 sudo[85016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:20 compute-0 sudo[85016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:20 compute-0 sudo[85041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:20 compute-0 sudo[85041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:20 compute-0 sudo[85041]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:20 compute-0 sudo[85066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:20 compute-0 sudo[85066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:20 compute-0 sudo[85066]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:20 compute-0 sudo[85091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:28:20 compute-0 sudo[85091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:20 compute-0 ceph-mon[75237]: Removing key for mgr.compute-0.pdxzyp
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.pdxzyp"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.pdxzyp"}]': finished
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:28:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.70029944 +0000 UTC m=+0.069570307 container create 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:28:20 compute-0 systemd[1]: Started libpod-conmon-86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a.scope.
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.672478697 +0000 UTC m=+0.041749604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.816709943 +0000 UTC m=+0.185980770 container init 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.82468445 +0000 UTC m=+0.193955277 container start 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.828080918 +0000 UTC m=+0.197351755 container attach 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:28:20 compute-0 focused_feistel[85173]: 167 167
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.84006384 +0000 UTC m=+0.209334657 container died 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:28:20 compute-0 systemd[1]: libpod-86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a.scope: Deactivated successfully.
Nov 29 07:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-55bf7895a3a5ae4f9295fe10cd3d0809e4d5ad7eb5d564f6180d95e257d20113-merged.mount: Deactivated successfully.
Nov 29 07:28:20 compute-0 podman[85157]: 2025-11-29 07:28:20.875970862 +0000 UTC m=+0.245241689 container remove 86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:20 compute-0 systemd[1]: libpod-conmon-86a458cba5b678d10e6abc514fecace359d3129a9bace91a44e8ca920347937a.scope: Deactivated successfully.
Nov 29 07:28:21 compute-0 podman[85197]: 2025-11-29 07:28:21.071593742 +0000 UTC m=+0.061340983 container create 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:21 compute-0 systemd[1]: Started libpod-conmon-599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f.scope.
Nov 29 07:28:21 compute-0 podman[85197]: 2025-11-29 07:28:21.045657809 +0000 UTC m=+0.035405050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:21 compute-0 podman[85197]: 2025-11-29 07:28:21.192963234 +0000 UTC m=+0.182710515 container init 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:21 compute-0 podman[85197]: 2025-11-29 07:28:21.204824302 +0000 UTC m=+0.194571543 container start 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:28:21 compute-0 podman[85197]: 2025-11-29 07:28:21.211769673 +0000 UTC m=+0.201516904 container attach 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:22 compute-0 ceph-mon[75237]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:22 compute-0 xenodochial_chaum[85213]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:28:22 compute-0 xenodochial_chaum[85213]: --> relative data size: 1.0
Nov 29 07:28:22 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:22 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d2206e5d-36d0-4dcd-a218-91d42a449afa
Nov 29 07:28:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa"} v 0) v1
Nov 29 07:28:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/687061256' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa"}]: dispatch
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:28:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/687061256' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa"}]': finished
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 07:28:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 07:28:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:28:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:22 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:28:22 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 07:28:23 compute-0 lvm[85276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:28:23 compute-0 lvm[85276]: VG ceph_vg0 finished
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 07:28:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/687061256' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa"}]: dispatch
Nov 29 07:28:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/687061256' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa"}]': finished
Nov 29 07:28:23 compute-0 ceph-mon[75237]: osdmap e4: 1 total, 0 up, 1 in
Nov 29 07:28:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:28:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529741819' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]:  stderr: got monmap epoch 1
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: --> Creating keyring file for osd.0
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 07:28:23 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d2206e5d-36d0-4dcd-a218-91d42a449afa --setuser ceph --setgroup ceph
Nov 29 07:28:23 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 3 completed events
Nov 29 07:28:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:28:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:24 compute-0 ceph-mon[75237]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3529741819' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:24 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 07:28:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:28:25 compute-0 ceph-mon[75237]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 07:28:25 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:23.527+0000 7fcfd56ec740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:23.527+0000 7fcfd56ec740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:23.527+0000 7fcfd56ec740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:23.527+0000 7fcfd56ec740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:25 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e72f2659-baec-4840-b3cf-a1856ca51c15
Nov 29 07:28:26 compute-0 ceph-mon[75237]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15"} v 0) v1
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3510365595' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15"}]: dispatch
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3510365595' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15"}]': finished
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:28:26 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:28:26 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 07:28:26 compute-0 lvm[86216]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:28:26 compute-0 lvm[86216]: VG ceph_vg1 finished
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 07:28:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:28:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105200583' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]:  stderr: got monmap epoch 1
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: --> Creating keyring file for osd.1
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 07:28:26 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid e72f2659-baec-4840-b3cf-a1856ca51c15 --setuser ceph --setgroup ceph
Nov 29 07:28:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3510365595' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15"}]: dispatch
Nov 29 07:28:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3510365595' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15"}]': finished
Nov 29 07:28:27 compute-0 ceph-mon[75237]: osdmap e5: 2 total, 0 up, 2 in
Nov 29 07:28:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:28:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4105200583' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:28 compute-0 ceph-mon[75237]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:27.010+0000 7fca71032740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:27.010+0000 7fca71032740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:27.010+0000 7fca71032740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:27.010+0000 7fca71032740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:29 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2406c235-b877-477d-8a53-b5b71e6811ae
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2406c235-b877-477d-8a53-b5b71e6811ae"} v 0) v1
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3896808209' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2406c235-b877-477d-8a53-b5b71e6811ae"}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3896808209' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2406c235-b877-477d-8a53-b5b71e6811ae"}]': finished
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:28:30 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:28:30 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:28:30 compute-0 lvm[87152]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:28:30 compute-0 lvm[87152]: VG ceph_vg2 finished
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 07:28:30 compute-0 ceph-mon[75237]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3896808209' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2406c235-b877-477d-8a53-b5b71e6811ae"}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3896808209' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2406c235-b877-477d-8a53-b5b71e6811ae"}]': finished
Nov 29 07:28:30 compute-0 ceph-mon[75237]: osdmap e6: 3 total, 0 up, 3 in
Nov 29 07:28:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:28:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:28:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883404893' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]:  stderr: got monmap epoch 1
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: --> Creating keyring file for osd.2
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 07:28:30 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2406c235-b877-477d-8a53-b5b71e6811ae --setuser ceph --setgroup ceph
Nov 29 07:28:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1883404893' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:28:32 compute-0 ceph-mon[75237]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:30.876+0000 7f5feb6f2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:30.876+0000 7f5feb6f2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:30.876+0000 7f5feb6f2740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]:  stderr: 2025-11-29T07:28:30.877+0000 7f5feb6f2740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 07:28:33 compute-0 xenodochial_chaum[85213]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 29 07:28:33 compute-0 systemd[1]: libpod-599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f.scope: Deactivated successfully.
Nov 29 07:28:33 compute-0 podman[85197]: 2025-11-29 07:28:33.504998351 +0000 UTC m=+12.494745552 container died 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:28:33 compute-0 systemd[1]: libpod-599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f.scope: Consumed 6.267s CPU time.
Nov 29 07:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-dac7f9956fd00196837c0fa40bad9d04e5b7179232d1b5836f46bf088340c237-merged.mount: Deactivated successfully.
Nov 29 07:28:33 compute-0 podman[85197]: 2025-11-29 07:28:33.56887981 +0000 UTC m=+12.558627011 container remove 599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:33 compute-0 systemd[1]: libpod-conmon-599b09e13bf37fcf3cc8a65c4b2288f90b13e13553ed1c875c8fa0c31bbfea9f.scope: Deactivated successfully.
Nov 29 07:28:33 compute-0 sudo[85091]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-0 sudo[88074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:33 compute-0 sudo[88074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:33 compute-0 sudo[88074]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-0 sudo[88099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:33 compute-0 sudo[88099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:33 compute-0 sudo[88099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-0 sudo[88124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:33 compute-0 sudo[88124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:33 compute-0 sudo[88124]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:33 compute-0 sudo[88149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:28:33 compute-0 sudo[88149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.309704328 +0000 UTC m=+0.058871609 container create 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:34 compute-0 systemd[1]: Started libpod-conmon-85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b.scope.
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.275929052 +0000 UTC m=+0.025096353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:34 compute-0 ceph-mon[75237]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.403370491 +0000 UTC m=+0.152537792 container init 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.412112508 +0000 UTC m=+0.161279789 container start 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:34 compute-0 nostalgic_mccarthy[88226]: 167 167
Nov 29 07:28:34 compute-0 systemd[1]: libpod-85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b.scope: Deactivated successfully.
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.420597238 +0000 UTC m=+0.169764539 container attach 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.42103437 +0000 UTC m=+0.170201651 container died 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a1fdb7f18f1b37c9383d831a7049097df2c55e74181562ab4792d61309ce39-merged.mount: Deactivated successfully.
Nov 29 07:28:34 compute-0 podman[88210]: 2025-11-29 07:28:34.494159819 +0000 UTC m=+0.243327120 container remove 85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:28:34 compute-0 systemd[1]: libpod-conmon-85e8daeb7ed86671cd9d14d01b23933b2c5c3adc7e86ad4c9c37f9ed398c733b.scope: Deactivated successfully.
Nov 29 07:28:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:34 compute-0 podman[88250]: 2025-11-29 07:28:34.667727936 +0000 UTC m=+0.059061974 container create 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:34 compute-0 systemd[1]: Started libpod-conmon-008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62.scope.
Nov 29 07:28:34 compute-0 podman[88250]: 2025-11-29 07:28:34.639557985 +0000 UTC m=+0.030892043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81ec4b355ac9c18439426b985ba836fde9146dd53c06f9967c2352fdf895cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81ec4b355ac9c18439426b985ba836fde9146dd53c06f9967c2352fdf895cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81ec4b355ac9c18439426b985ba836fde9146dd53c06f9967c2352fdf895cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81ec4b355ac9c18439426b985ba836fde9146dd53c06f9967c2352fdf895cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:34 compute-0 podman[88250]: 2025-11-29 07:28:34.767024055 +0000 UTC m=+0.158358143 container init 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:28:34 compute-0 podman[88250]: 2025-11-29 07:28:34.77764126 +0000 UTC m=+0.168975298 container start 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:34 compute-0 podman[88250]: 2025-11-29 07:28:34.781426489 +0000 UTC m=+0.172760497 container attach 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:35 compute-0 laughing_jemison[88266]: {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     "0": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "devices": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "/dev/loop3"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             ],
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_name": "ceph_lv0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_size": "21470642176",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "name": "ceph_lv0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "tags": {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.crush_device_class": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.encrypted": "0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_id": "0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.vdo": "0"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             },
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "vg_name": "ceph_vg0"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         }
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     ],
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     "1": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "devices": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "/dev/loop4"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             ],
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_name": "ceph_lv1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_size": "21470642176",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "name": "ceph_lv1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "tags": {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.crush_device_class": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.encrypted": "0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_id": "1",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.vdo": "0"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             },
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "vg_name": "ceph_vg1"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         }
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     ],
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     "2": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "devices": [
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "/dev/loop5"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             ],
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_name": "ceph_lv2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_size": "21470642176",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "name": "ceph_lv2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "tags": {
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.crush_device_class": "",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.encrypted": "0",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osd_id": "2",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:                 "ceph.vdo": "0"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             },
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "type": "block",
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:             "vg_name": "ceph_vg2"
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:         }
Nov 29 07:28:35 compute-0 laughing_jemison[88266]:     ]
Nov 29 07:28:35 compute-0 laughing_jemison[88266]: }
Nov 29 07:28:35 compute-0 systemd[1]: libpod-008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62.scope: Deactivated successfully.
Nov 29 07:28:35 compute-0 podman[88250]: 2025-11-29 07:28:35.627045988 +0000 UTC m=+1.018379996 container died 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c81ec4b355ac9c18439426b985ba836fde9146dd53c06f9967c2352fdf895cbc-merged.mount: Deactivated successfully.
Nov 29 07:28:35 compute-0 podman[88250]: 2025-11-29 07:28:35.692874538 +0000 UTC m=+1.084208546 container remove 008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:35 compute-0 systemd[1]: libpod-conmon-008bf9b7e82a6c7f80e25a521a253b3b4be225838b51d2d0d0a26bab31bf6f62.scope: Deactivated successfully.
Nov 29 07:28:35 compute-0 sudo[88149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 07:28:35 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 07:28:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:35 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 07:28:35 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 29 07:28:35 compute-0 sudo[88287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:35 compute-0 sudo[88287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:35 compute-0 sudo[88287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:35 compute-0 sudo[88312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:35 compute-0 sudo[88312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:35 compute-0 sudo[88312]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:35 compute-0 sudo[88337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:35 compute-0 sudo[88337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:35 compute-0 sudo[88337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:36 compute-0 sudo[88362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:36 compute-0 sudo[88362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:36 compute-0 ceph-mon[75237]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 07:28:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.44423053 +0000 UTC m=+0.046755186 container create 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:36 compute-0 systemd[1]: Started libpod-conmon-79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392.scope.
Nov 29 07:28:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.422830884 +0000 UTC m=+0.025355570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.538442726 +0000 UTC m=+0.140967412 container init 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.552518892 +0000 UTC m=+0.155043538 container start 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.557264465 +0000 UTC m=+0.159789121 container attach 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:36 compute-0 loving_proskuriakova[88443]: 167 167
Nov 29 07:28:36 compute-0 systemd[1]: libpod-79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392.scope: Deactivated successfully.
Nov 29 07:28:36 compute-0 conmon[88443]: conmon 79932435f0396592d965 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392.scope/container/memory.events
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.560424127 +0000 UTC m=+0.162948783 container died 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-88ab17c22cf2c73dd6885702a3e0f291ef9586f0d957c7a3363142267a392144-merged.mount: Deactivated successfully.
Nov 29 07:28:36 compute-0 podman[88427]: 2025-11-29 07:28:36.607574631 +0000 UTC m=+0.210099287 container remove 79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:36 compute-0 systemd[1]: libpod-conmon-79932435f0396592d965506f58a846fcb93676ec12a16b15db604396a1036392.scope: Deactivated successfully.
Nov 29 07:28:37 compute-0 podman[88475]: 2025-11-29 07:28:37.198356323 +0000 UTC m=+0.061342363 container create 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:37 compute-0 systemd[1]: Started libpod-conmon-69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88.scope.
Nov 29 07:28:37 compute-0 podman[88475]: 2025-11-29 07:28:37.164208477 +0000 UTC m=+0.027194527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:37 compute-0 podman[88475]: 2025-11-29 07:28:37.308612997 +0000 UTC m=+0.171599007 container init 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:28:37 compute-0 podman[88475]: 2025-11-29 07:28:37.317429685 +0000 UTC m=+0.180415685 container start 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:28:37 compute-0 podman[88475]: 2025-11-29 07:28:37.322086977 +0000 UTC m=+0.185072977 container attach 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:28:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:37 compute-0 ceph-mon[75237]: Deploying daemon osd.0 on compute-0
Nov 29 07:28:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test[88491]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:28:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test[88491]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:28:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test[88491]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:28:38 compute-0 systemd[1]: libpod-69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88.scope: Deactivated successfully.
Nov 29 07:28:38 compute-0 podman[88475]: 2025-11-29 07:28:38.047249288 +0000 UTC m=+0.910235288 container died 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d7fe90fc983fe525d4f6e6af500398936ee417230fa70201c5d61e88641c93-merged.mount: Deactivated successfully.
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:28:38
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [balancer INFO root] No pools available
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:39 compute-0 podman[88475]: 2025-11-29 07:28:39.097008389 +0000 UTC m=+1.959994389 container remove 69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:39 compute-0 systemd[1]: libpod-conmon-69c0fbd2c8e402493c26e8b8607248b01213e36ce49afef31a4a765e1f7bec88.scope: Deactivated successfully.
Nov 29 07:28:39 compute-0 ceph-mon[75237]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:39 compute-0 sshd-session[88497]: Received disconnect from 103.236.140.19 port 49350:11: Bye Bye [preauth]
Nov 29 07:28:39 compute-0 sshd-session[88497]: Disconnected from authenticating user root 103.236.140.19 port 49350 [preauth]
Nov 29 07:28:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:40 compute-0 ceph-mon[75237]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:40 compute-0 systemd[1]: Reloading.
Nov 29 07:28:40 compute-0 systemd-rc-local-generator[88554]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:40 compute-0 systemd-sysv-generator[88557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:41 compute-0 systemd[1]: Reloading.
Nov 29 07:28:41 compute-0 systemd-sysv-generator[88601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:28:41 compute-0 systemd-rc-local-generator[88597]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:28:41 compute-0 systemd[1]: Starting Ceph osd.0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:28:41 compute-0 podman[88659]: 2025-11-29 07:28:41.754212124 +0000 UTC m=+0.031607402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:42 compute-0 podman[88659]: 2025-11-29 07:28:42.295785618 +0000 UTC m=+0.573180886 container create 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:28:42 compute-0 ceph-mon[75237]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:43 compute-0 podman[88659]: 2025-11-29 07:28:43.052724135 +0000 UTC m=+1.330119413 container init 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:43 compute-0 podman[88659]: 2025-11-29 07:28:43.060622969 +0000 UTC m=+1.338018197 container start 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:43 compute-0 podman[88659]: 2025-11-29 07:28:43.357068988 +0000 UTC m=+1.634464336 container attach 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:28:43 compute-0 ceph-mon[75237]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:44 compute-0 bash[88659]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:28:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate[88675]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 07:28:44 compute-0 bash[88659]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 07:28:44 compute-0 systemd[1]: libpod-5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58.scope: Deactivated successfully.
Nov 29 07:28:44 compute-0 systemd[1]: libpod-5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58.scope: Consumed 1.144s CPU time.
Nov 29 07:28:44 compute-0 podman[88797]: 2025-11-29 07:28:44.237243716 +0000 UTC m=+0.032448375 container died 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c835a03821544c39811805bbbcc8295546ce2570af91693cd75e24edfea0e95-merged.mount: Deactivated successfully.
Nov 29 07:28:45 compute-0 podman[88797]: 2025-11-29 07:28:45.770326777 +0000 UTC m=+1.565531426 container remove 5c0603ac7ff5f45cf18a49eae6d2da8c956365af7115f7ff21f2c22f9216da58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:28:46 compute-0 podman[88858]: 2025-11-29 07:28:45.999448877 +0000 UTC m=+0.019429785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:46 compute-0 podman[88858]: 2025-11-29 07:28:46.772189074 +0000 UTC m=+0.792169962 container create c5fd94a830bcf37115c943624491ae5c8276fb70c91544fdfc999801b83d5f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:46 compute-0 ceph-mon[75237]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:47 compute-0 sudo[88894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyldspjualzhuiohrzcbgqtjdyrhlyrs ; /usr/bin/python3'
Nov 29 07:28:47 compute-0 sudo[88894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:28:47 compute-0 python3[88896]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:28:48 compute-0 sshd-session[88910]: Invalid user tempuser from 20.185.243.158 port 52326
Nov 29 07:28:48 compute-0 sshd-session[88910]: Received disconnect from 20.185.243.158 port 52326:11: Bye Bye [preauth]
Nov 29 07:28:48 compute-0 sshd-session[88910]: Disconnected from invalid user tempuser 20.185.243.158 port 52326 [preauth]
Nov 29 07:28:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fb9781a9a1b6691f91b4c73c61cd66fabc4fee0c9b747cf25b5942c5931fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fb9781a9a1b6691f91b4c73c61cd66fabc4fee0c9b747cf25b5942c5931fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fb9781a9a1b6691f91b4c73c61cd66fabc4fee0c9b747cf25b5942c5931fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fb9781a9a1b6691f91b4c73c61cd66fabc4fee0c9b747cf25b5942c5931fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fb9781a9a1b6691f91b4c73c61cd66fabc4fee0c9b747cf25b5942c5931fe/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:49 compute-0 podman[88898]: 2025-11-29 07:28:49.116099062 +0000 UTC m=+1.578907353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:28:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:51 compute-0 podman[88898]: 2025-11-29 07:28:51.039804308 +0000 UTC m=+3.502612509 container create ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 07:28:51 compute-0 ceph-mon[75237]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:51 compute-0 ceph-mon[75237]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:51 compute-0 systemd[1]: Started libpod-conmon-ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e.scope.
Nov 29 07:28:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc0629fb4472210760cde929139907e0a08a883a032c451ff37663d6d504707/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc0629fb4472210760cde929139907e0a08a883a032c451ff37663d6d504707/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc0629fb4472210760cde929139907e0a08a883a032c451ff37663d6d504707/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:51 compute-0 podman[88858]: 2025-11-29 07:28:51.484012433 +0000 UTC m=+5.503993341 container init c5fd94a830bcf37115c943624491ae5c8276fb70c91544fdfc999801b83d5f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:51 compute-0 podman[88858]: 2025-11-29 07:28:51.492833189 +0000 UTC m=+5.512814107 container start c5fd94a830bcf37115c943624491ae5c8276fb70c91544fdfc999801b83d5f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:51 compute-0 ceph-osd[88926]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:28:51 compute-0 ceph-osd[88926]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:28:51 compute-0 ceph-osd[88926]: pidfile_write: ignore empty --pid-file
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x56222310d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x56222310d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x56222310d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x56222310d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x562223f45800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x562223f45800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x562223f45800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x562223f45800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x562223f45800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:28:51 compute-0 ceph-osd[88926]: bdev(0x56222310d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:28:51 compute-0 bash[88858]: c5fd94a830bcf37115c943624491ae5c8276fb70c91544fdfc999801b83d5f09
Nov 29 07:28:51 compute-0 systemd[1]: Started Ceph osd.0 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:28:52 compute-0 ceph-osd[88926]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 07:28:52 compute-0 ceph-osd[88926]: load: jerasure load: lrc 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:28:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:52 compute-0 ceph-osd[88926]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:28:52 compute-0 ceph-osd[88926]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs mount
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs mount shared_bdev_used = 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Git sha 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DB SUMMARY
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DB Session ID:  3LH5S50M2N3PHPYZD0HJ
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                     Options.env: 0x562223f97c70
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                Options.info_log: 0x5622231948a0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.write_buffer_manager: 0x5622240a0460
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.row_cache: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.wal_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.wal_compression: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Compression algorithms supported:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZSTD supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
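The block above for [m-2] repeats, value for value, for each of BlueStore's sharded column families ([p-0] through [p-2] and [O-0] follow below); only the column family name, the factory and cache pointers, and the [O-0] cache capacity differ. As a reading aid, the per-family settings in the dump map onto the stock RocksDB C++ API roughly as in the hypothetical helper below. This is a minimal sketch for interpreting the log, not the code path ceph-osd actually takes (Ceph assembles these options from its bluestore_rocksdb_options string).

    #include <rocksdb/options.h>

    // Hypothetical helper: the per-column-family values as logged above.
    rocksdb::ColumnFamilyOptions MakeCfOptionsFromDump() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;               // Options.write_buffer_size (16 MiB)
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;     // Options.compression: LZ4
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;  // "Disabled"
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;           // 64 MiB per SST target
      cf.max_bytes_for_level_base = 1073741824;      // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                              // Options.ttl (30 days)
      cf.force_consistency_checks = true;
      return cf;
    }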
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
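Note that the table_factory section is identical for every m-* and p-* family, including the block_cache pointer (0x5622231811f0): those families share one BinnedLRUCache of capacity 483183820 bytes (about 460 MiB), while [O-0] further down logs its own cache (0x562223181090, 536870912 bytes). BinnedLRUCache is Ceph's in-tree cache implementation; the sketch below substitutes stock RocksDB's NewLRUCache as the nearest upstream equivalent, and assumes 10 bits per key for the bloom filter, which the dump does not record.

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Hypothetical helper: the table_factory options as logged above.
    rocksdb::BlockBasedTableOptions MakeTableOptionsFromDump(
        std::shared_ptr<rocksdb::Cache> shared_cache) {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;    // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;   // pin_top_level_index_and_filter: 1
      t.block_cache = std::move(shared_cache);   // one cache shared by the m-*/p-* shards
      t.block_size = 4096;
      t.block_size_deviation = 10;
      t.block_restart_interval = 16;
      t.metadata_block_size = 4096;
      // The dump only says "filter_policy: bloomfilter"; 10 bits/key is an assumption.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      t.whole_key_filtering = true;              // whole_key_filtering: 1
      t.format_version = 5;
      t.enable_index_compression = true;
      return t;
    }

    // NewLRUCache stands in for Ceph's BinnedLRUCache, with the logged sizing:
    //   auto cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);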
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
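With level_compaction_dynamic_level_bytes at 0, the level targets follow statically from max_bytes_for_level_base (1 GiB) and the 8.0 multiplier: L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, and so on up through num_levels = 7. The logged max_compaction_bytes of 1677721600 is likewise just the RocksDB default of 25 x target_file_size_base (25 x 67108864). A small standalone check of the arithmetic:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const double mult = 8.0;               // max_bytes_for_level_multiplier
      uint64_t target = 1073741824ULL;       // max_bytes_for_level_base -> L1
      // num_levels: 7 means L0 plus L1..L6; prints 1 GiB, 8 GiB, 64 GiB,
      // 512 GiB, 4 TiB, 32 TiB.
      for (int level = 1; level < 7; ++level) {
        std::printf("L%d target: %llu bytes\n", level,
                    static_cast<unsigned long long>(target));
        target = static_cast<uint64_t>(target * mult);
      }
      return 0;
    }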
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622231942c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
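Each family also registers the CompactOnDeletionCollector shown in its table_properties_collectors line: an SST file is marked for compaction once a sliding window of 32768 entries contains at least 16384 tombstones (the deletion-ratio trigger is disabled at 0). Upstream RocksDB exposes the same collector through a utility factory; the hypothetical helper below only illustrates the logged parameters, not how Ceph wires it in.

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Hypothetical helper: attach the deletion-triggered compaction collector
    // with the parameters from the log line (window 32768, trigger 16384, ratio 0).
    void AddCompactOnDeletion(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }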
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223194240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223194240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223194240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2bc3201b-08f2-4af1-acc2-b73a031ac439
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401332658055, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401332658447, "job": 1, "event": "recovery_finished"}
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: freelist init
Nov 29 07:28:52 compute-0 ceph-osd[88926]: freelist _read_cfg
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs umount
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bdev(0x562223fc7400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs mount
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:28:52 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Git sha 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DB SUMMARY
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DB Session ID:  3LH5S50M2N3PHPYZD0HI
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                     Options.env: 0x562224148310
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                Options.info_log: 0x56222318ac20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.write_buffer_manager: 0x5622240a06e0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.row_cache: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                              Options.wal_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.wal_compression: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Compression algorithms supported:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZSTD supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5622231811f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:           Options.merge_operator: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562223f93c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562223181090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.compression: LZ4
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.num_levels: 7
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2bc3201b-08f2-4af1-acc2-b73a031ac439
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401332918062, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:28:52 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:28:53 compute-0 sudo[88362]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:53 compute-0 podman[88898]: 2025-11-29 07:28:53.471337814 +0000 UTC m=+5.934146085 container init ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:28:53 compute-0 podman[88898]: 2025-11-29 07:28:53.481770402 +0000 UTC m=+5.944578593 container start ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401333657634, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401332, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2bc3201b-08f2-4af1-acc2-b73a031ac439", "db_session_id": "3LH5S50M2N3PHPYZD0HI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:28:53 compute-0 ceph-mon[75237]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:53 compute-0 podman[88898]: 2025-11-29 07:28:53.659782737 +0000 UTC m=+6.122590948 container attach ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401333670807, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401333, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2bc3201b-08f2-4af1-acc2-b73a031ac439", "db_session_id": "3LH5S50M2N3PHPYZD0HI", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:28:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401333691549, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401333, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2bc3201b-08f2-4af1-acc2-b73a031ac439", "db_session_id": "3LH5S50M2N3PHPYZD0HI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401333698103, "job": 1, "event": "recovery_finished"}
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:28:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 07:28:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 07:28:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:53 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 07:28:53 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 07:28:53 compute-0 sudo[89327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:53 compute-0 sudo[89327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562224155c00
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: DB pointer 0x562224089a00
Nov 29 07:28:53 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:28:53 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 07:28:53 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:28:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.9 total, 0.9 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:28:53 compute-0 ceph-osd[88926]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:28:53 compute-0 ceph-osd[88926]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:28:53 compute-0 ceph-osd[88926]: _get_class not permitted to load lua
Nov 29 07:28:53 compute-0 sudo[89327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-0 ceph-osd[88926]: _get_class not permitted to load sdk
Nov 29 07:28:53 compute-0 ceph-osd[88926]: _get_class not permitted to load test_remote_reads
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 load_pgs
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 load_pgs opened 0 pgs
Nov 29 07:28:53 compute-0 ceph-osd[88926]: osd.0 0 log_to_monitors true
Nov 29 07:28:53 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0[88917]: 2025-11-29T07:28:53.779+0000 7f17e123e740 -1 osd.0 0 log_to_monitors true
Nov 29 07:28:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 07:28:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 07:28:53 compute-0 sudo[89385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:53 compute-0 sudo[89385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:53 compute-0 sudo[89385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-0 sudo[89431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:53 compute-0 sudo[89431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:53 compute-0 sudo[89431]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:53 compute-0 sudo[89456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:28:53 compute-0 sudo[89456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:28:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/979553553' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:28:54 compute-0 friendly_kare[88922]: 
Nov 29 07:28:54 compute-0 friendly_kare[88922]: {"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764401310,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:28:40.536457+0000","services":{}},"progress_events":{}}
Nov 29 07:28:54 compute-0 systemd[1]: libpod-ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e.scope: Deactivated successfully.
Nov 29 07:28:54 compute-0 podman[88898]: 2025-11-29 07:28:54.154593584 +0000 UTC m=+6.617401845 container died ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:54 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:28:54 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:28:55 compute-0 sshd-session[89407]: Invalid user bob from 103.234.151.178 port 21880
Nov 29 07:28:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 07:28:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:28:55 compute-0 sshd-session[89407]: Received disconnect from 103.234.151.178 port 21880:11: Bye Bye [preauth]
Nov 29 07:28:55 compute-0 sshd-session[89407]: Disconnected from invalid user bob 103.234.151.178 port 21880 [preauth]
Nov 29 07:28:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 07:28:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 29 07:28:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:59 compute-0 ceph-mon[75237]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/979553553' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dc0629fb4472210760cde929139907e0a08a883a032c451ff37663d6d504707-merged.mount: Deactivated successfully.
Nov 29 07:28:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 29 07:28:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:28:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 07:28:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:28:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:28:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:28:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:28:59 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:28:59 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:28:59 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:00 compute-0 podman[88898]: 2025-11-29 07:29:00.01822481 +0000 UTC m=+12.481033001 container remove ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e (image=quay.io/ceph/ceph:v18, name=friendly_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:00 compute-0 sudo[88894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:29:00 compute-0 ceph-mon[75237]: Deploying daemon osd.1 on compute-0
Nov 29 07:29:00 compute-0 ceph-mon[75237]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:00 compute-0 ceph-mon[75237]: purged_snaps scrub starts
Nov 29 07:29:00 compute-0 ceph-mon[75237]: purged_snaps scrub ok
Nov 29 07:29:00 compute-0 ceph-mon[75237]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:00 compute-0 ceph-mon[75237]: from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 07:29:00 compute-0 ceph-mon[75237]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:00 compute-0 ceph-mon[75237]: osdmap e7: 3 total, 0 up, 3 in
Nov 29 07:29:00 compute-0 ceph-mon[75237]: from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:00 compute-0 systemd[1]: libpod-conmon-ded77a2b61bc1dcd5b46b007b93a8dbcd9b38c38b054d1757c0b00bea5dc4b1e.scope: Deactivated successfully.
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 done with init, starting boot process
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 start_boot
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:29:00 compute-0 ceph-osd[88926]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.357599874 +0000 UTC m=+0.096263553 container create e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.294798436 +0000 UTC m=+0.033462225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:00 compute-0 systemd[1]: Started libpod-conmon-e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786.scope.
Nov 29 07:29:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.530033779 +0000 UTC m=+0.268697568 container init e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.541404043 +0000 UTC m=+0.280067732 container start e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:00 compute-0 modest_clarke[89549]: 167 167
Nov 29 07:29:00 compute-0 systemd[1]: libpod-e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786.scope: Deactivated successfully.
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.574804005 +0000 UTC m=+0.313467684 container attach e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.575548965 +0000 UTC m=+0.314212684 container died e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-08dcc676eb5c1a4d054ad07128a5e36b649aa77c1a40ae72df4a1de92283e283-merged.mount: Deactivated successfully.
Nov 29 07:29:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:00 compute-0 podman[89533]: 2025-11-29 07:29:00.7517146 +0000 UTC m=+0.490378279 container remove e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:00 compute-0 systemd[1]: libpod-conmon-e523af947fccb0e2ec9cfb2879743c9b86510673b56c627cbc74dd82d48f9786.scope: Deactivated successfully.
Nov 29 07:29:01 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:01 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:01 compute-0 ceph-mon[75237]: from='osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:01 compute-0 ceph-mon[75237]: osdmap e8: 3 total, 0 up, 3 in
Nov 29 07:29:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.165138523 +0000 UTC m=+0.062489111 container create 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.140796842 +0000 UTC m=+0.038147470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:01 compute-0 systemd[1]: Started libpod-conmon-3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040.scope.
Nov 29 07:29:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.312497898 +0000 UTC m=+0.209848546 container init 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.322481005 +0000 UTC m=+0.219831593 container start 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.357256204 +0000 UTC m=+0.254606872 container attach 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:01 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test[89600]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:29:01 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test[89600]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:29:01 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test[89600]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:29:01 compute-0 systemd[1]: libpod-3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040.scope: Deactivated successfully.
Nov 29 07:29:01 compute-0 podman[89584]: 2025-11-29 07:29:01.970018691 +0000 UTC m=+0.867369269 container died 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07f3f7d6b3a8bb6872ee8ed05e098543e10a05988755c7e5c8cf4088f0a31f9-merged.mount: Deactivated successfully.
Nov 29 07:29:02 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:02 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:02 compute-0 ceph-mon[75237]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:02 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:02 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:02 compute-0 podman[89584]: 2025-11-29 07:29:02.175289873 +0000 UTC m=+1.072640451 container remove 3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:29:02 compute-0 systemd[1]: libpod-conmon-3302a88a05fb0ace41b77cc62b7d0107b96120c8c7da8fce2dd090dc53696040.scope: Deactivated successfully.
Nov 29 07:29:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:02 compute-0 systemd[1]: Reloading.
Nov 29 07:29:02 compute-0 systemd-rc-local-generator[89660]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:02 compute-0 systemd-sysv-generator[89667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:02 compute-0 systemd[1]: Reloading.
Nov 29 07:29:03 compute-0 systemd-rc-local-generator[89707]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:03 compute-0 systemd-sysv-generator[89711]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:03 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:03 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:03 compute-0 systemd[1]: Starting Ceph osd.1 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:29:03 compute-0 podman[89763]: 2025-11-29 07:29:03.600982762 +0000 UTC m=+0.047485388 container create eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:29:03 compute-0 podman[89763]: 2025-11-29 07:29:03.580393853 +0000 UTC m=+0.026896499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:03 compute-0 podman[89763]: 2025-11-29 07:29:03.736524873 +0000 UTC m=+0.183027539 container init eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:03 compute-0 podman[89763]: 2025-11-29 07:29:03.743348955 +0000 UTC m=+0.189851581 container start eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 07:29:03 compute-0 podman[89763]: 2025-11-29 07:29:03.787643619 +0000 UTC m=+0.234146285 container attach eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:04 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:04 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:04 compute-0 ceph-mon[75237]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:04 compute-0 sshd-session[89758]: Invalid user qwerty from 114.34.106.146 port 46706
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:29:04 compute-0 bash[89763]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate[89778]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 07:29:04 compute-0 bash[89763]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 07:29:04 compute-0 sshd-session[89758]: Received disconnect from 114.34.106.146 port 46706:11: Bye Bye [preauth]
Nov 29 07:29:04 compute-0 sshd-session[89758]: Disconnected from invalid user qwerty 114.34.106.146 port 46706 [preauth]
Nov 29 07:29:04 compute-0 systemd[1]: libpod-eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e.scope: Deactivated successfully.
Nov 29 07:29:04 compute-0 podman[89763]: 2025-11-29 07:29:04.858351726 +0000 UTC m=+1.304854352 container died eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:29:04 compute-0 systemd[1]: libpod-eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e.scope: Consumed 1.134s CPU time.
Nov 29 07:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a47a758f76d88cf38d130e86825c28d4d912f69ae9d1c41a49ee1d53ac984b-merged.mount: Deactivated successfully.
Nov 29 07:29:04 compute-0 podman[89763]: 2025-11-29 07:29:04.942298568 +0000 UTC m=+1.388801194 container remove eeceb9b999d206edc6786d6ebd052e7bbfb0df532d000c89da8caa2400249c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:29:04 compute-0 ceph-osd[88926]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.279 iops: 6727.430 elapsed_sec: 0.446
Nov 29 07:29:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [WRN] : OSD bench result of 6727.429845 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:29:04 compute-0 ceph-osd[88926]: osd.0 0 waiting for initial osdmap
Nov 29 07:29:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0[88917]: 2025-11-29T07:29:04.997+0000 7f17dd1be640 -1 osd.0 0 waiting for initial osdmap
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 29 07:29:05 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-0[88917]: 2025-11-29T07:29:05.023+0000 7f17d87e6640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/154314275; not ready for session (expect reconnect)
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:29:05 compute-0 podman[89948]: 2025-11-29 07:29:05.191770222 +0000 UTC m=+0.059827299 container create ca50ff0f22dbf64d497c9d3a58d6ae1d357277ad5cfac0032e2bacf2860940b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:29:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275] boot
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:05 compute-0 ceph-osd[88926]: osd.0 9 state: booting -> active
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
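The ENOENT here is benign ordering: the mgr asks for metadata before osd.1 and osd.2 have booted, so the mon has nothing stored for them yet. Once a daemon reports in, the same query succeeds:

    # Returns hostname, devices, osd_objectstore, etc. for a booted OSD:
    ceph osd metadata 0
    # Cross-check against the up/in counts in the osdmap lines above:
    ceph osd stat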
Nov 29 07:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dadea35e9283a2a747c71482e89c88d60c9a98f3b50f8601aaf891c5dd7131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dadea35e9283a2a747c71482e89c88d60c9a98f3b50f8601aaf891c5dd7131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dadea35e9283a2a747c71482e89c88d60c9a98f3b50f8601aaf891c5dd7131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dadea35e9283a2a747c71482e89c88d60c9a98f3b50f8601aaf891c5dd7131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dadea35e9283a2a747c71482e89c88d60c9a98f3b50f8601aaf891c5dd7131/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
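These kernel notes mean the backing XFS filesystem was created without the bigtime feature, so inode timestamps cap at 2038 (0x7fffffff). Harmless for now; one way to confirm, assuming a reasonably recent xfsprogs:

    # bigtime=0 corresponds to the "timestamps until 2038" messages above:
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'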
Nov 29 07:29:05 compute-0 podman[89948]: 2025-11-29 07:29:05.25082777 +0000 UTC m=+0.118884857 container init ca50ff0f22dbf64d497c9d3a58d6ae1d357277ad5cfac0032e2bacf2860940b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:05 compute-0 podman[89948]: 2025-11-29 07:29:05.164307398 +0000 UTC m=+0.032364535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:05 compute-0 podman[89948]: 2025-11-29 07:29:05.2624432 +0000 UTC m=+0.130500247 container start ca50ff0f22dbf64d497c9d3a58d6ae1d357277ad5cfac0032e2bacf2860940b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:29:05 compute-0 bash[89948]: ca50ff0f22dbf64d497c9d3a58d6ae1d357277ad5cfac0032e2bacf2860940b2
Nov 29 07:29:05 compute-0 systemd[1]: Started Ceph osd.1 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:29:05 compute-0 ceph-osd[89968]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:29:05 compute-0 ceph-osd[89968]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:29:05 compute-0 ceph-osd[89968]: pidfile_write: ignore empty --pid-file
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf08db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf08db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf08db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf08db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf171d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf171d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf171d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf171d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf171d800 /var/lib/ceph/osd/ceph-1/block) close
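The size line is self-consistent: 0x4ffc00000 is 21470642176 bytes, i.e. 20476 MiB, 4 MiB shy of a full 20 GiB (the log rounds this to "20 GiB" for display; LVM metadata overhead is one plausible explanation for the shortfall, though the log does not say). Quick check in a shell:

    printf '%d bytes = %d MiB\n' $((0x4ffc00000)) $((0x4ffc00000 / 1048576))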
Nov 29 07:29:05 compute-0 sudo[89456]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 29 07:29:05 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 29 07:29:05 compute-0 sudo[89981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:05 compute-0 sudo[89981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:05 compute-0 sudo[89981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-0 sudo[90006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:05 compute-0 sudo[90006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:05 compute-0 sudo[90006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-0 sudo[90031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:05 compute-0 sudo[90031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:05 compute-0 sudo[90031]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf08db800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:29:05 compute-0 sudo[90056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:29:05 compute-0 sudo[90056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
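The sudo command above is cephadm's per-daemon deploy: the mgr drops a versioned cephadm script under /var/lib/ceph/<fsid>/ and runs it as root with `_orch deploy`. The result is one systemd unit plus one podman container per daemon, which can be inventoried afterwards:

    # List cephadm-managed daemons on this host:
    cephadm ls --no-detail
    # Unit naming follows ceph-<fsid>@<daemon>.service, matching the
    # "Started Ceph osd.1 for 321e9cb7-..." line above:
    systemctl status 'ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@osd.2.service'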
Nov 29 07:29:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:05 compute-0 ceph-osd[89968]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 07:29:05 compute-0 ceph-osd[89968]: load: jerasure load: lrc 
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:05 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) close
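The _set_cache_sizes ratios partition the 1 GiB bluestore cache: 45% metadata, 45% RocksDB key/value blocks, 4% onodes, 6% data. The kv share works out to exactly the BinnedLRUCache capacity that appears in the RocksDB options dump further down:

    echo $((1073741824 * 45 / 100))   # 483183820, the block_cache 'capacity'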
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.037222743 +0000 UTC m=+0.041022636 container create 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:29:06 compute-0 systemd[1]: Started libpod-conmon-5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3.scope.
Nov 29 07:29:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.02135045 +0000 UTC m=+0.025150373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.128298526 +0000 UTC m=+0.132098439 container init 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.137268766 +0000 UTC m=+0.141068689 container start 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:06 compute-0 systemd[1]: libpod-5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3.scope: Deactivated successfully.
Nov 29 07:29:06 compute-0 clever_robinson[90142]: 167 167
Nov 29 07:29:06 compute-0 conmon[90142]: conmon 5c86bef04e38f0b29217 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3.scope/container/memory.events
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.143994476 +0000 UTC m=+0.147794459 container attach 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.144647963 +0000 UTC m=+0.148447856 container died 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0972b260b54882d8b06829ff47fa1ee4a125e4917aaac429a04520fd8db8fe1-merged.mount: Deactivated successfully.
Nov 29 07:29:06 compute-0 podman[90125]: 2025-11-29 07:29:06.194007151 +0000 UTC m=+0.197807044 container remove 5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:29:06 compute-0 systemd[1]: libpod-conmon-5c86bef04e38f0b29217ff33798f7cdaaf30ac09c76e1256895fb9239e6133a3.scope: Deactivated successfully.
Nov 29 07:29:06 compute-0 ceph-mon[75237]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:29:06 compute-0 ceph-mon[75237]: OSD bench result of 6727.429845 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:29:06 compute-0 ceph-mon[75237]: osd.0 [v2:192.168.122.100:6802/154314275,v1:192.168.122.100:6803/154314275] boot
Nov 29 07:29:06 compute-0 ceph-mon[75237]: osdmap e9: 3 total, 1 up, 3 in
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: Deploying daemon osd.2 on compute-0
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 29 07:29:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:06 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:06 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
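The two mclock figures appear to be linked: 157286400 bytes/second is 150 MiB/s (the default sequential-bandwidth cap for HDDs), and dividing it by the 315 IOPS capacity retained earlier gives approximately the per-IO cost, within rounding of the logged value:

    echo "scale=2; 157286400 / 315" | bc   # ~499322.86 vs 499321.90 logged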
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa4c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs mount
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs mount shared_bdev_used = 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Git sha 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DB SUMMARY
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DB Session ID:  EE5524XPMC1WL31QK2T9
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
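The DB SUMMARY above (CURRENT, MANIFEST-000032, one SST, one WAL) lives inside the BlueStore block device via bluefs rather than as plain files on disk. With the OSD stopped, the embedded store can still be examined offline:

    # Opens the OSD's embedded RocksDB through bluefs and prints statistics:
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-1 stats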
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                     Options.env: 0x558bf176fd50
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                Options.info_log: 0x558bf0966800
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.write_buffer_manager: 0x558bf185c460
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.row_cache: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.wal_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.wal_compression: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Compression algorithms supported:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZSTD supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DMutex implementation: pthread_mutex_t
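The long options dump that follows reflects BlueStore's compiled-in RocksDB tuning combined with the bluestore_rocksdb_options config string; individual knobs are not normally set one-by-one. To see what this OSD is actually applying:

    ceph config get osd.1 bluestore_rocksdb_options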
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966e60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1d5c29f0-2faf-4d90-9068-dcd970900225
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346439157, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346439461, "job": 1, "event": "recovery_finished"}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: freelist init
Nov 29 07:29:06 compute-0 ceph-osd[89968]: freelist _read_cfg
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs umount
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:29:06 compute-0 podman[90178]: 2025-11-29 07:29:06.488647171 +0000 UTC m=+0.069412905 container create 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:06 compute-0 systemd[1]: Started libpod-conmon-0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0.scope.
Nov 29 07:29:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:06 compute-0 podman[90178]: 2025-11-29 07:29:06.463623463 +0000 UTC m=+0.044389257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:06 compute-0 podman[90178]: 2025-11-29 07:29:06.607877885 +0000 UTC m=+0.188643639 container init 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:29:06 compute-0 podman[90178]: 2025-11-29 07:29:06.621137929 +0000 UTC m=+0.201903663 container start 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:29:06 compute-0 podman[90178]: 2025-11-29 07:29:06.625229119 +0000 UTC m=+0.205994873 container attach 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 07:29:06 compute-0 ceph-mgr[75527]: [devicehealth INFO root] creating mgr pool
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 07:29:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bdev(0x558bf0aa5400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs mount
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Git sha 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DB SUMMARY
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DB Session ID:  EE5524XPMC1WL31QK2T8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                     Options.env: 0x558bf190c460
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                Options.info_log: 0x558bf09671c0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.write_buffer_manager: 0x558bf185c6e0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.row_cache: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                              Options.wal_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.wal_compression: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Compression algorithms supported:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZSTD supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf09669c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094edd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966f60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966f60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558bf0966f60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558bf094e430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1d5c29f0-2faf-4d90-9068-dcd970900225
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346696688, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346701597, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401346, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1d5c29f0-2faf-4d90-9068-dcd970900225", "db_session_id": "EE5524XPMC1WL31QK2T8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346705042, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401346, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1d5c29f0-2faf-4d90-9068-dcd970900225", "db_session_id": "EE5524XPMC1WL31QK2T8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346708136, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401346, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1d5c29f0-2faf-4d90-9068-dcd970900225", "db_session_id": "EE5524XPMC1WL31QK2T8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401346709984, "job": 1, "event": "recovery_finished"}
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558bf194fc00
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: DB pointer 0x558bf0989a00
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 07:29:06 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:29:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:29:06 compute-0 ceph-osd[89968]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:29:06 compute-0 ceph-osd[89968]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:29:06 compute-0 ceph-osd[89968]: _get_class not permitted to load lua
Nov 29 07:29:06 compute-0 ceph-osd[89968]: _get_class not permitted to load sdk
Nov 29 07:29:06 compute-0 ceph-osd[89968]: _get_class not permitted to load test_remote_reads
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 load_pgs
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 load_pgs opened 0 pgs
Nov 29 07:29:06 compute-0 ceph-osd[89968]: osd.1 0 log_to_monitors true
Nov 29 07:29:06 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:29:06.749+0000 7fe7c6658740 -1 osd.1 0 log_to_monitors true
Nov 29 07:29:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 07:29:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 07:29:07 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test[90388]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:29:07 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test[90388]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:29:07 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test[90388]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:29:07 compute-0 systemd[1]: libpod-0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0.scope: Deactivated successfully.
Nov 29 07:29:07 compute-0 podman[90178]: 2025-11-29 07:29:07.329775697 +0000 UTC m=+0.910541431 container died 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:29:07 compute-0 ceph-mon[75237]: osdmap e10: 3 total, 1 up, 3 in
Nov 29 07:29:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:29:07 compute-0 ceph-mon[75237]: from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 07:29:07 compute-0 ceph-osd[88926]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:29:07 compute-0 ceph-osd[88926]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 07:29:07 compute-0 ceph-osd[88926]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:07 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:07 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 07:29:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3009e1227bb1bbcfc49350988ba1dd5a611a3629df573b82103bc31629e180f-merged.mount: Deactivated successfully.
Nov 29 07:29:07 compute-0 podman[90178]: 2025-11-29 07:29:07.419220816 +0000 UTC m=+0.999986560 container remove 0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate-test, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:29:07 compute-0 systemd[1]: libpod-conmon-0b90ddded2837f50470cae2b2c3d054de35c17eb46e6f4dae98670423eb8ddb0.scope: Deactivated successfully.
Nov 29 07:29:07 compute-0 systemd[1]: Reloading.
Nov 29 07:29:07 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:29:07 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:29:07 compute-0 systemd-rc-local-generator[90665]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:07 compute-0 systemd-sysv-generator[90669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:08 compute-0 systemd[1]: Reloading.
Nov 29 07:29:08 compute-0 systemd-sysv-generator[90707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:29:08 compute-0 systemd-rc-local-generator[90704]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:29:08 compute-0 systemd[1]: Starting Ceph osd.2 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:29:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 07:29:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 done with init, starting boot process
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 start_boot
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:29:08 compute-0 ceph-osd[89968]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 29 07:29:08 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [] r=-1 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:08 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=12) [] r=-1 lpr=12 pi=[11,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:08 compute-0 ceph-mon[75237]: pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 07:29:08 compute-0 ceph-mon[75237]: osdmap e11: 3 total, 1 up, 3 in
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:08 compute-0 podman[90762]: 2025-11-29 07:29:08.71252546 +0000 UTC m=+0.090077917 container create 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:08 compute-0 podman[90762]: 2025-11-29 07:29:08.663860779 +0000 UTC m=+0.041413256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:08 compute-0 podman[90762]: 2025-11-29 07:29:08.875615506 +0000 UTC m=+0.253167953 container init 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:08 compute-0 podman[90762]: 2025-11-29 07:29:08.886121186 +0000 UTC m=+0.263673613 container start 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:29:08 compute-0 podman[90762]: 2025-11-29 07:29:08.914780422 +0000 UTC m=+0.292332939 container attach 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:29:09 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:09 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 07:29:09 compute-0 ceph-mon[75237]: osdmap e12: 3 total, 1 up, 3 in
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:29:10 compute-0 bash[90762]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:29:10 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate[90778]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 07:29:10 compute-0 bash[90762]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 07:29:10 compute-0 systemd[1]: libpod-677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c.scope: Deactivated successfully.
Nov 29 07:29:10 compute-0 systemd[1]: libpod-677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c.scope: Consumed 1.309s CPU time.
Nov 29 07:29:10 compute-0 podman[90896]: 2025-11-29 07:29:10.217410805 +0000 UTC m=+0.032661484 container died 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-93cd1c9b5f9fbced1e183c77535874c2970b87f00fe69f1da2289a05a20ebac1-merged.mount: Deactivated successfully.
Nov 29 07:29:10 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:10 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:10 compute-0 podman[90896]: 2025-11-29 07:29:10.389566943 +0000 UTC m=+0.204817582 container remove 677b62071cf19547667b6518eccca619dd3aab49e94f545277c0587cf11fed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2-activate, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:29:10 compute-0 ceph-mon[75237]: purged_snaps scrub starts
Nov 29 07:29:10 compute-0 ceph-mon[75237]: purged_snaps scrub ok
Nov 29 07:29:10 compute-0 ceph-mon[75237]: pgmap v48: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:10 compute-0 podman[90957]: 2025-11-29 07:29:10.678411188 +0000 UTC m=+0.070037902 container create c7b261a6c7aac2da343c25b74c789ed573ccc29fd5dc6e6b9c57142d83f16cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:10 compute-0 podman[90957]: 2025-11-29 07:29:10.63621181 +0000 UTC m=+0.027838504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be72c914fa338a9654f4d9a6aabdc9161d90ee6c1c773a3e08d9461870f8f74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be72c914fa338a9654f4d9a6aabdc9161d90ee6c1c773a3e08d9461870f8f74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be72c914fa338a9654f4d9a6aabdc9161d90ee6c1c773a3e08d9461870f8f74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be72c914fa338a9654f4d9a6aabdc9161d90ee6c1c773a3e08d9461870f8f74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be72c914fa338a9654f4d9a6aabdc9161d90ee6c1c773a3e08d9461870f8f74/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:10 compute-0 podman[90957]: 2025-11-29 07:29:10.821434927 +0000 UTC m=+0.213061691 container init c7b261a6c7aac2da343c25b74c789ed573ccc29fd5dc6e6b9c57142d83f16cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:29:10 compute-0 podman[90957]: 2025-11-29 07:29:10.836059599 +0000 UTC m=+0.227686313 container start c7b261a6c7aac2da343c25b74c789ed573ccc29fd5dc6e6b9c57142d83f16cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:29:10 compute-0 bash[90957]: c7b261a6c7aac2da343c25b74c789ed573ccc29fd5dc6e6b9c57142d83f16cd4
Nov 29 07:29:10 compute-0 systemd[1]: Started Ceph osd.2 for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:29:10 compute-0 ceph-osd[90977]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:29:10 compute-0 ceph-osd[90977]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:29:10 compute-0 ceph-osd[90977]: pidfile_write: ignore empty --pid-file
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f16cf800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f16cf800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f16cf800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f16cf800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f2511000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f2511000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f2511000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f2511000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:29:10 compute-0 ceph-osd[90977]: bdev(0x5571f2511000 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:29:10 compute-0 sudo[90056]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:11 compute-0 sudo[90990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:11 compute-0 sudo[90990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:11 compute-0 sudo[90990]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f16cf800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:29:11 compute-0 sudo[91015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:11 compute-0 sudo[91015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:11 compute-0 sudo[91015]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:11 compute-0 sudo[91040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:11 compute-0 sudo[91040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:11 compute-0 sudo[91040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:11 compute-0 sudo[91068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:29:11 compute-0 sudo[91068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:11 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:11 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:11 compute-0 ceph-osd[90977]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 07:29:11 compute-0 ceph-osd[90977]: load: jerasure load: lrc 
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:11 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.752033344 +0000 UTC m=+0.075695673 container create 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.702198522 +0000 UTC m=+0.025860831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:11 compute-0 systemd[1]: Started libpod-conmon-0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8.scope.
Nov 29 07:29:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.91248863 +0000 UTC m=+0.236150929 container init 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.923894464 +0000 UTC m=+0.247556753 container start 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:11 compute-0 heuristic_wilson[91157]: 167 167
Nov 29 07:29:11 compute-0 systemd[1]: libpod-0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8.scope: Deactivated successfully.
Nov 29 07:29:11 compute-0 conmon[91157]: conmon 0d1b0c4655c8ad07485c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8.scope/container/memory.events
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.950606788 +0000 UTC m=+0.274269087 container attach 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:11 compute-0 podman[91137]: 2025-11-29 07:29:11.95218038 +0000 UTC m=+0.275842709 container died 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:29:11 compute-0 ceph-mon[75237]: pgmap v49: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:11 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:11 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:11 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:12 compute-0 ceph-osd[90977]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f2511c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs mount
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs mount shared_bdev_used = 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Git sha 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DB SUMMARY
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DB Session ID:  P8FJFJQJ4UDWN8L9SM4G
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                     Options.env: 0x5571f2563c70
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                Options.info_log: 0x5571f1756800
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.write_buffer_manager: 0x5571f266e460
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.row_cache: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.wal_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.wal_compression: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Compression algorithms supported:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZSTD supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756260)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-096846a9b9620caf58e32e50d6695dce54eb84588c50e56319eb00bed8cb44f0-merged.mount: Deactivated successfully.
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93c5e90b-0a02-4f24-bd18-5ea5e5950a4d
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352032083, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352032324, "job": 1, "event": "recovery_finished"}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: freelist init
Nov 29 07:29:12 compute-0 ceph-osd[90977]: freelist _read_cfg
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs umount
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:29:12 compute-0 podman[91137]: 2025-11-29 07:29:12.096453003 +0000 UTC m=+0.420115302 container remove 0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:12 compute-0 systemd[1]: libpod-conmon-0d1b0c4655c8ad07485c101a20c888a70c473058f9574c7e88d8e72d4ebb99a8.scope: Deactivated successfully.
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bdev(0x5571f26f4400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs mount
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Git sha 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DB SUMMARY
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DB Session ID:  P8FJFJQJ4UDWN8L9SM4H
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                     Options.env: 0x5571f2714460
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                Options.info_log: 0x5571f1756560
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.write_buffer_manager: 0x5571f266e460
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.row_cache: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                              Options.wal_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.wal_compression: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Compression algorithms supported:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZSTD supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f17569c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f17431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:           Options.merge_operator: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5571f1756320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5571f1743090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.compression: LZ4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.num_levels: 7
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 93c5e90b-0a02-4f24-bd18-5ea5e5950a4d
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352312902, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352336877, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401352, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93c5e90b-0a02-4f24-bd18-5ea5e5950a4d", "db_session_id": "P8FJFJQJ4UDWN8L9SM4H", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352356162, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401352, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93c5e90b-0a02-4f24-bd18-5ea5e5950a4d", "db_session_id": "P8FJFJQJ4UDWN8L9SM4H", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:12 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:12 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:12 compute-0 podman[91376]: 2025-11-29 07:29:12.386085749 +0000 UTC m=+0.100387543 container create 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352387325, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401352, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "93c5e90b-0a02-4f24-bd18-5ea5e5950a4d", "db_session_id": "P8FJFJQJ4UDWN8L9SM4H", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401352388903, "job": 1, "event": "recovery_finished"}
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:29:12 compute-0 podman[91376]: 2025-11-29 07:29:12.327938996 +0000 UTC m=+0.042240800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:12 compute-0 systemd[1]: Started libpod-conmon-035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8.scope.
Nov 29 07:29:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de034dbab627db14f17c2356ca25c0a772c77cb018ab1857028406e7f091d2e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de034dbab627db14f17c2356ca25c0a772c77cb018ab1857028406e7f091d2e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de034dbab627db14f17c2356ca25c0a772c77cb018ab1857028406e7f091d2e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de034dbab627db14f17c2356ca25c0a772c77cb018ab1857028406e7f091d2e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5571f18b0000
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: DB pointer 0x5571f2657a00
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 07:29:12 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:29:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
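The indented block above is part of a single RocksDB statistics dump logged by the ceph-osd BlueStore backend, with one pair of Level/Priority tables per column family (p-0..p-2, O-0..O-2, L, P); the block-cache "occupancy" value 18446744073709551615 is 2^64 - 1, which reads as an unsigned -1 sentinel rather than a real count. As a rough aid for sifting such dumps, a minimal Python sketch (the input filename is hypothetical, assuming the dump has been saved to a plain-text file) that extracts each column family's cumulative-compaction and block-cache-usage lines:

    import re
    import sys

    # Minimal sketch: scan a saved copy of a RocksDB stats dump like the one
    # above and print, per "Compaction Stats [...]" section, the cumulative
    # compaction line and the block-cache usage.  The filename is hypothetical.
    section = None
    path = sys.argv[1] if len(sys.argv) > 1 else "osd-rocksdb-stats.txt"
    with open(path) as f:
        for line in f:
            m = re.search(r"\*\* Compaction Stats \[(.+?)\] \*\*", line)
            if m:
                section = m.group(1)
                continue
            if section and "Cumulative compaction:" in line:
                print(f"[{section}] {line.strip()}")
            elif section and line.lstrip().startswith("Block cache BinnedLRUCache"):
                usage = re.search(r"usage: (\S+ \S+)", line)
                if usage:
                    print(f"[{section}] block cache usage: {usage.group(1)}")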
Nov 29 07:29:12 compute-0 ceph-osd[90977]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:29:12 compute-0 ceph-osd[90977]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:29:12 compute-0 ceph-osd[90977]: _get_class not permitted to load lua
Nov 29 07:29:12 compute-0 ceph-osd[90977]: _get_class not permitted to load sdk
Nov 29 07:29:12 compute-0 ceph-osd[90977]: _get_class not permitted to load test_remote_reads
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 load_pgs
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 load_pgs opened 0 pgs
Nov 29 07:29:12 compute-0 ceph-osd[90977]: osd.2 0 log_to_monitors true
Nov 29 07:29:12 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:29:12.532+0000 7f81743ac740 -1 osd.2 0 log_to_monitors true
Nov 29 07:29:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 07:29:12 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:29:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:12 compute-0 podman[91376]: 2025-11-29 07:29:12.559205842 +0000 UTC m=+0.273507646 container init 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:29:12 compute-0 podman[91376]: 2025-11-29 07:29:12.581454657 +0000 UTC m=+0.295756431 container start 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 29 07:29:12 compute-0 podman[91376]: 2025-11-29 07:29:12.610891203 +0000 UTC m=+0.325192987 container attach 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:29:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 29 07:29:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mon[75237]: from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
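The initial_weight of 0.0195 in the create-or-move command above is consistent with ~20 GiB OSD devices (the nearby pgmap lines report 20 GiB per up OSD): CRUSH weights are conventionally the device capacity expressed in TiB. A quick check of that arithmetic (a sketch only; the OSD's exact rounding may differ):

    # Quick check: a ~20 GiB device expressed in TiB gives the CRUSH weight
    # 0.0195 seen in "create-or-move ... initial_weight 0.0195" above.
    size_gib = 20
    weight_tib = size_gib / 1024      # GiB -> TiB
    print(round(weight_tib, 4))       # 0.0195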
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:13 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:13 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2520611032; not ready for session (expect reconnect)
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:13 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.573 iops: 5010.574 elapsed_sec: 0.599
Nov 29 07:29:13 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : OSD bench result of 5010.573973 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
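The warning above is the mClock scheduler's automatic capacity probe rejecting its own measurement: the osd bench figures on the preceding line are internally consistent (5010.574 IOPS at a 4 KiB block size reproduces the logged 19.573 MiB/s), but because the value lies outside the 50.0-500.0 IOPS threshold range quoted in the warning, the IOPS capacity for osd.1 stays at the 315 default. As the message itself recommends, the capacity can be measured externally (e.g. with fio) and then set explicitly, typically via `ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <value>` (or the _ssd variant, per device class). A minimal cross-check of the logged figures:

    # Cross-check of the osd bench figures logged for osd.1 above:
    # IOPS at a 4 KiB block size should reproduce the reported bandwidth.
    iops = 5010.574
    block_size_kib = 4
    bandwidth_mib_s = iops * block_size_kib / 1024
    print(f"{bandwidth_mib_s:.3f} MiB/s")   # 19.573, matching the log
    print(50.0 <= iops <= 500.0)            # False -> capacity left at 315 IOPS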
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 0 waiting for initial osdmap
Nov 29 07:29:13 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:29:13.403+0000 7fe7c25d8640 -1 osd.1 0 waiting for initial osdmap
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 set_numa_affinity not setting numa affinity
Nov 29 07:29:13 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:29:13.437+0000 7fe7bdc00640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:13 compute-0 ceph-osd[89968]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 29 07:29:13 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:29:13 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:29:13 compute-0 happy_banach[91575]: {
Nov 29 07:29:13 compute-0 happy_banach[91575]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_id": 2,
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "type": "bluestore"
Nov 29 07:29:13 compute-0 happy_banach[91575]:     },
Nov 29 07:29:13 compute-0 happy_banach[91575]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_id": 0,
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "type": "bluestore"
Nov 29 07:29:13 compute-0 happy_banach[91575]:     },
Nov 29 07:29:13 compute-0 happy_banach[91575]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_id": 1,
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:29:13 compute-0 happy_banach[91575]:         "type": "bluestore"
Nov 29 07:29:13 compute-0 happy_banach[91575]:     }
Nov 29 07:29:13 compute-0 happy_banach[91575]: }
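The happy_banach output above is a JSON report in the style of `ceph-volume raw list`, keyed by OSD UUID and mapping each of the three OSDs to its ceph_fsid, backing logical volume, and objectstore type. A minimal sketch for reading such a report, assuming it has been captured to a file (the filename is hypothetical):

    import json

    # Minimal sketch: load the OSD inventory JSON shown above and print one
    # line per OSD, ordered by osd_id.  The filename is hypothetical.
    with open("osd-inventory.json") as f:
        report = json.load(f)

    for osd_uuid, info in sorted(report.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"(type={info['type']}, fsid={info['ceph_fsid']}, uuid={osd_uuid})")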
Nov 29 07:29:13 compute-0 systemd[1]: libpod-035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8.scope: Deactivated successfully.
Nov 29 07:29:13 compute-0 podman[91376]: 2025-11-29 07:29:13.739996071 +0000 UTC m=+1.454297835 container died 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:29:13 compute-0 systemd[1]: libpod-035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8.scope: Consumed 1.158s CPU time.
Nov 29 07:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-de034dbab627db14f17c2356ca25c0a772c77cb018ab1857028406e7f091d2e5-merged.mount: Deactivated successfully.
Nov 29 07:29:13 compute-0 podman[91376]: 2025-11-29 07:29:13.805986014 +0000 UTC m=+1.520287788 container remove 035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_banach, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:13 compute-0 systemd[1]: libpod-conmon-035ce3c3265cc649afafe00e2037bac09a734ef0bbc545ddeb3e5bac65f424b8.scope: Deactivated successfully.
Nov 29 07:29:13 compute-0 sudo[91068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:13 compute-0 sudo[91653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:13 compute-0 sudo[91653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:13 compute-0 sudo[91653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-0 sudo[91678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:29:14 compute-0 sudo[91678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:14 compute-0 sudo[91678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 done with init, starting boot process
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 start_boot
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:29:14 compute-0 ceph-osd[90977]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032] boot
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 29 07:29:14 compute-0 ceph-osd[89968]: osd.1 14 state: booting -> active
Nov 29 07:29:14 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:14 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:14 compute-0 ceph-mon[75237]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 07:29:14 compute-0 ceph-mon[75237]: osdmap e13: 3 total, 1 up, 3 in
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:14 compute-0 ceph-mon[75237]: OSD bench result of 5010.573973 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:14 compute-0 sudo[91703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:14 compute-0 sudo[91703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:14 compute-0 sudo[91703]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-0 sudo[91728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:14 compute-0 sudo[91728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:14 compute-0 sudo[91728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-0 sudo[91753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:14 compute-0 sudo[91753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:14 compute-0 sudo[91753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:14 compute-0 sudo[91778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:29:14 compute-0 sudo[91778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 29 07:29:14 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=-1 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:14 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=-1 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:14 compute-0 podman[91871]: 2025-11-29 07:29:14.862781921 +0000 UTC m=+0.088028343 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:29:14 compute-0 podman[91871]: 2025-11-29 07:29:14.981138371 +0000 UTC m=+0.206384803 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:29:15 compute-0 ceph-mon[75237]: osd.1 [v2:192.168.122.100:6806/2520611032,v1:192.168.122.100:6807/2520611032] boot
Nov 29 07:29:15 compute-0 ceph-mon[75237]: osdmap e14: 3 total, 2 up, 3 in
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mon[75237]: osdmap e15: 3 total, 2 up, 3 in
Nov 29 07:29:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:15 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[11,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 07:29:15 compute-0 sudo[91778]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:15 compute-0 ceph-mgr[75527]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 07:29:15 compute-0 sudo[92006]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 29 07:29:15 compute-0 sudo[92006]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:29:15 compute-0 sudo[92006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 29 07:29:15 compute-0 sudo[92006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
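The lines above show SMART data collection, presumably driven by the mgr devicehealth module seen checking health just before: the monitor's admin socket receives a `smart` command and smartctl is invoked via sudo with JSON output against /dev/vda, while the earlier "Fail to parse JSON result from daemon osd.2" entry shows the module receiving output it could not decode. A minimal sketch of the same kind of check (run smartctl and verify its output parses as JSON), using the flags and device path from the log; it requires smartmontools and root:

    import json
    import subprocess

    # Minimal sketch mirroring the collection step above: run smartctl with
    # JSON output against the device from the log and report whether the
    # result parses.  Requires smartmontools and sufficient privileges.
    device = "/dev/vda"
    proc = subprocess.run(
        ["/usr/sbin/smartctl", "-x", "--json=o", device],
        capture_output=True, text=True,
    )
    try:
        data = json.loads(proc.stdout)
        print(f"{device}: SMART JSON parsed, {len(data)} top-level keys")
    except json.JSONDecodeError:
        print(f"{device}: output did not parse as JSON (rc={proc.returncode})")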
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:29:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:15 compute-0 sudo[92009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:15 compute-0 sudo[92009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:15 compute-0 sudo[92009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:15 compute-0 sudo[92034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:15 compute-0 sudo[92034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:15 compute-0 sudo[92034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:15 compute-0 sudo[92059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:15 compute-0 sudo[92059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:15 compute-0 sudo[92059]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-0 sudo[92084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:29:16 compute-0 sudo[92084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:16 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:16 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 07:29:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 29 07:29:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 29 07:29:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:16 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:16 compute-0 ceph-mon[75237]: purged_snaps scrub starts
Nov 29 07:29:16 compute-0 ceph-mon[75237]: purged_snaps scrub ok
Nov 29 07:29:16 compute-0 ceph-mon[75237]: pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:29:16 compute-0 sudo[92084]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-0 sudo[92140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:16 compute-0 sudo[92140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:16 compute-0 sudo[92140]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-0 sudo[92165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:16 compute-0 sudo[92165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:16 compute-0 sudo[92165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-0 sudo[92190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:16 compute-0 sudo[92190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:16 compute-0 sudo[92190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:16 compute-0 sudo[92215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:29:16 compute-0 sudo[92215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:17 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:17 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.333695567 +0000 UTC m=+0.075761544 container create 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.285541251 +0000 UTC m=+0.027607278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:17 compute-0 systemd[1]: Started libpod-conmon-5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93.scope.
Nov 29 07:29:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.456888888 +0000 UTC m=+0.198954935 container init 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.466679129 +0000 UTC m=+0.208745116 container start 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:29:17 compute-0 modest_margulis[92294]: 167 167
Nov 29 07:29:17 compute-0 systemd[1]: libpod-5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93.scope: Deactivated successfully.
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.501594962 +0000 UTC m=+0.243660949 container attach 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.502268 +0000 UTC m=+0.244333977 container died 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fwfehy(active, since 99s)
Nov 29 07:29:17 compute-0 ceph-mon[75237]: osdmap e16: 3 total, 2 up, 3 in
Nov 29 07:29:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:17 compute-0 ceph-mon[75237]: pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:29:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff05c77921682f10b50390ab04daf335cb0686cac80ac10dce2f98e5fbc99213-merged.mount: Deactivated successfully.
Nov 29 07:29:17 compute-0 podman[92278]: 2025-11-29 07:29:17.689464681 +0000 UTC m=+0.431530698 container remove 5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:29:17 compute-0 systemd[1]: libpod-conmon-5458392a72bbff3482f1b0667df1b38d5dc1f5f9500c0bfe63f2e32f91df7d93.scope: Deactivated successfully.
Nov 29 07:29:17 compute-0 podman[92320]: 2025-11-29 07:29:17.896162701 +0000 UTC m=+0.075421066 container create 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:17 compute-0 podman[92320]: 2025-11-29 07:29:17.849944036 +0000 UTC m=+0.029202421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:17 compute-0 systemd[1]: Started libpod-conmon-3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a.scope.
Nov 29 07:29:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b1fbf49d1a8d0ab977065ab53927153824e98dc728bf5adfb5f607f5d5609/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b1fbf49d1a8d0ab977065ab53927153824e98dc728bf5adfb5f607f5d5609/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b1fbf49d1a8d0ab977065ab53927153824e98dc728bf5adfb5f607f5d5609/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b1fbf49d1a8d0ab977065ab53927153824e98dc728bf5adfb5f607f5d5609/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:18 compute-0 podman[92320]: 2025-11-29 07:29:18.042804618 +0000 UTC m=+0.222063063 container init 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:29:18 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:18 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:18 compute-0 podman[92320]: 2025-11-29 07:29:18.050506043 +0000 UTC m=+0.229764428 container start 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:18 compute-0 podman[92320]: 2025-11-29 07:29:18.091051956 +0000 UTC m=+0.270310341 container attach 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:29:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:29:18 compute-0 ceph-mon[75237]: mgrmap e9: compute-0.fwfehy(active, since 99s)
Nov 29 07:29:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3761086254; not ready for session (expect reconnect)
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.599 iops: 4761.375 elapsed_sec: 0.630
Nov 29 07:29:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : OSD bench result of 4761.374896 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 0 waiting for initial osdmap
Nov 29 07:29:19 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:29:19.569+0000 7f817032c640 -1 osd.2 0 waiting for initial osdmap
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 set_numa_affinity not setting numa affinity
Nov 29 07:29:19 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:29:19.597+0000 7f816b954640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 29 07:29:19 compute-0 frosty_dirac[92336]: [
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:     {
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "available": false,
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "ceph_device": false,
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "lsm_data": {},
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "lvs": [],
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "path": "/dev/sr0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "rejected_reasons": [
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "Has a FileSystem",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "Insufficient space (<5GB)"
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         ],
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         "sys_api": {
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "actuators": null,
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "device_nodes": "sr0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "devname": "sr0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "human_readable_size": "482.00 KB",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "id_bus": "ata",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "model": "QEMU DVD-ROM",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "nr_requests": "2",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "parent": "/dev/sr0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "partitions": {},
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "path": "/dev/sr0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "removable": "1",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "rev": "2.5+",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "ro": "0",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "rotational": "1",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "sas_address": "",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "sas_device_handle": "",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "scheduler_mode": "mq-deadline",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "sectors": 0,
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "sectorsize": "2048",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "size": 493568.0,
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "support_discard": "2048",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "type": "disk",
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:             "vendor": "QEMU"
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:         }
Nov 29 07:29:19 compute-0 frosty_dirac[92336]:     }
Nov 29 07:29:19 compute-0 frosty_dirac[92336]: ]
Nov 29 07:29:19 compute-0 systemd[1]: libpod-3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a.scope: Deactivated successfully.
Nov 29 07:29:19 compute-0 systemd[1]: libpod-3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a.scope: Consumed 1.590s CPU time.
Nov 29 07:29:19 compute-0 podman[92320]: 2025-11-29 07:29:19.644313774 +0000 UTC m=+1.823572169 container died 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 07:29:19 compute-0 ceph-mon[75237]: pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:29:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254] boot
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e8b1fbf49d1a8d0ab977065ab53927153824e98dc728bf5adfb5f607f5d5609-merged.mount: Deactivated successfully.
Nov 29 07:29:19 compute-0 ceph-osd[90977]: osd.2 17 state: booting -> active
Nov 29 07:29:19 compute-0 podman[92320]: 2025-11-29 07:29:19.711488298 +0000 UTC m=+1.890746653 container remove 3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:19 compute-0 systemd[1]: libpod-conmon-3b9b3082a06d00f6b93c02c963130fcd7bb59727d96334742ff83e7546eb6e2a.scope: Deactivated successfully.
Nov 29 07:29:19 compute-0 sudo[92215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43691k
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43691k
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6e2d245c-b7f3-423e-a7b3-dbdc06f45fc8 does not exist
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 194f50a5-1e1f-4f4b-887f-6071750c82b2 does not exist
Nov 29 07:29:19 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 017a4fb0-4488-4e06-bce6-f6a4b1235e90 does not exist
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:19 compute-0 sudo[94117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:19 compute-0 sudo[94117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:19 compute-0 sudo[94117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:19 compute-0 sudo[94142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:19 compute-0 sudo[94142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:19 compute-0 sudo[94142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:19 compute-0 sudo[94167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:19 compute-0 sudo[94167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:19 compute-0 sudo[94167]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:20 compute-0 sudo[94192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:29:20 compute-0 sudo[94192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:20 compute-0 podman[94258]: 2025-11-29 07:29:20.384253027 +0000 UTC m=+0.026769066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 29 07:29:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 07:29:21 compute-0 podman[94258]: 2025-11-29 07:29:21.810608265 +0000 UTC m=+1.453124274 container create ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:21 compute-0 ceph-mon[75237]: OSD bench result of 4761.374896 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:29:21 compute-0 ceph-mon[75237]: osd.2 [v2:192.168.122.100:6810/3761086254,v1:192.168.122.100:6811/3761086254] boot
Nov 29 07:29:21 compute-0 ceph-mon[75237]: osdmap e17: 3 total, 3 up, 3 in
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: Adjusting osd_memory_target on compute-0 to 43691k
Nov 29 07:29:21 compute-0 ceph-mon[75237]: Unable to set osd_memory_target on compute-0 to 44740198: error parsing value: Value '44740198' is below minimum 939524096
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:21 compute-0 systemd[1]: Started libpod-conmon-ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f.scope.
Nov 29 07:29:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 29 07:29:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 29 07:29:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:21 compute-0 podman[94258]: 2025-11-29 07:29:21.950295685 +0000 UTC m=+1.592811744 container init ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:21 compute-0 podman[94258]: 2025-11-29 07:29:21.963427786 +0000 UTC m=+1.605943785 container start ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:29:21 compute-0 mystifying_darwin[94274]: 167 167
Nov 29 07:29:21 compute-0 systemd[1]: libpod-ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f.scope: Deactivated successfully.
Nov 29 07:29:21 compute-0 podman[94258]: 2025-11-29 07:29:21.970662209 +0000 UTC m=+1.613178268 container attach ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:21 compute-0 podman[94258]: 2025-11-29 07:29:21.971963604 +0000 UTC m=+1.614479653 container died ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8516c2f9b79768cafa12cc5bdc835920364a2989c8c848e916134e7bb76896-merged.mount: Deactivated successfully.
Nov 29 07:29:22 compute-0 podman[94258]: 2025-11-29 07:29:22.023507611 +0000 UTC m=+1.666023620 container remove ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:22 compute-0 systemd[1]: libpod-conmon-ab30d9b1ac0c337202d1da86c7648f6e2c0125f5d54de5d62a671dbbedcd054f.scope: Deactivated successfully.
Nov 29 07:29:22 compute-0 podman[94297]: 2025-11-29 07:29:22.184206963 +0000 UTC m=+0.046616467 container create 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:22 compute-0 systemd[1]: Started libpod-conmon-18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855.scope.
Nov 29 07:29:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:22 compute-0 podman[94297]: 2025-11-29 07:29:22.167700992 +0000 UTC m=+0.030110516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:22 compute-0 podman[94297]: 2025-11-29 07:29:22.266050599 +0000 UTC m=+0.128460123 container init 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:29:22 compute-0 podman[94297]: 2025-11-29 07:29:22.272319037 +0000 UTC m=+0.134728541 container start 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:29:22 compute-0 podman[94297]: 2025-11-29 07:29:22.27543985 +0000 UTC m=+0.137849344 container attach 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:29:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 29 07:29:22 compute-0 ceph-mon[75237]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 29 07:29:22 compute-0 ceph-mon[75237]: osdmap e18: 3 total, 3 up, 3 in
Nov 29 07:29:23 compute-0 optimistic_lichterman[94314]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:29:23 compute-0 optimistic_lichterman[94314]: --> relative data size: 1.0
Nov 29 07:29:23 compute-0 optimistic_lichterman[94314]: --> All data devices are unavailable
Nov 29 07:29:23 compute-0 systemd[1]: libpod-18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855.scope: Deactivated successfully.
Nov 29 07:29:23 compute-0 systemd[1]: libpod-18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855.scope: Consumed 1.087s CPU time.
Nov 29 07:29:23 compute-0 podman[94297]: 2025-11-29 07:29:23.392625629 +0000 UTC m=+1.255035173 container died 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-59df4bed7e8982aa2efd2cb72d9d0a0ea0dd2fcedd9d0923fca7dd99033e0584-merged.mount: Deactivated successfully.
Nov 29 07:29:23 compute-0 podman[94297]: 2025-11-29 07:29:23.470316534 +0000 UTC m=+1.332726078 container remove 18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:29:23 compute-0 systemd[1]: libpod-conmon-18afd6ab13cf00077270e5603c24a5bd06d0ec8fa1e58cbd7d7762cba8cbd855.scope: Deactivated successfully.
Nov 29 07:29:23 compute-0 sudo[94192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:23 compute-0 sudo[94357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:23 compute-0 sudo[94357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:23 compute-0 sudo[94357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:23 compute-0 sudo[94382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:23 compute-0 sudo[94382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:23 compute-0 sudo[94382]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:23 compute-0 sudo[94407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:23 compute-0 sudo[94407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:23 compute-0 sudo[94407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:23 compute-0 sudo[94432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:29:23 compute-0 sudo[94432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:23 compute-0 ceph-mon[75237]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.175460118 +0000 UTC m=+0.041904900 container create 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:29:24 compute-0 systemd[1]: Started libpod-conmon-09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892.scope.
Nov 29 07:29:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.244397669 +0000 UTC m=+0.110842491 container init 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.250718579 +0000 UTC m=+0.117163361 container start 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 07:29:24 compute-0 interesting_wilbur[94513]: 167 167
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.254724256 +0000 UTC m=+0.121169058 container attach 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.159467342 +0000 UTC m=+0.025912134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:24 compute-0 systemd[1]: libpod-09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892.scope: Deactivated successfully.
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.255538097 +0000 UTC m=+0.121982879 container died 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f26ac75362b9718cfef3b0e14a1ac5cd1392734950cf368f30499bd166582c-merged.mount: Deactivated successfully.
Nov 29 07:29:24 compute-0 podman[94497]: 2025-11-29 07:29:24.482452458 +0000 UTC m=+0.348897270 container remove 09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:24 compute-0 systemd[1]: libpod-conmon-09cea18aeac5199fc01674cf375939305e9ffae88c8640fdd92968f16eb7c892.scope: Deactivated successfully.
Nov 29 07:29:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:24 compute-0 podman[94536]: 2025-11-29 07:29:24.709828451 +0000 UTC m=+0.060319322 container create dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:29:24 compute-0 systemd[1]: Started libpod-conmon-dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d.scope.
Nov 29 07:29:24 compute-0 podman[94536]: 2025-11-29 07:29:24.687648219 +0000 UTC m=+0.038139140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3b22e3836a239b77a48370f5fc465d6e7ea1a078e02e26e56d1b133259b2a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3b22e3836a239b77a48370f5fc465d6e7ea1a078e02e26e56d1b133259b2a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3b22e3836a239b77a48370f5fc465d6e7ea1a078e02e26e56d1b133259b2a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3b22e3836a239b77a48370f5fc465d6e7ea1a078e02e26e56d1b133259b2a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:24 compute-0 podman[94536]: 2025-11-29 07:29:24.813234653 +0000 UTC m=+0.163725534 container init dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:24 compute-0 podman[94536]: 2025-11-29 07:29:24.826475127 +0000 UTC m=+0.176965998 container start dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:24 compute-0 podman[94536]: 2025-11-29 07:29:24.831294505 +0000 UTC m=+0.181785376 container attach dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:25 compute-0 festive_ellis[94553]: {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     "0": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "devices": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "/dev/loop3"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             ],
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_name": "ceph_lv0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_size": "21470642176",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "name": "ceph_lv0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "tags": {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.crush_device_class": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.encrypted": "0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_id": "0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.vdo": "0"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             },
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "vg_name": "ceph_vg0"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         }
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     ],
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     "1": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "devices": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "/dev/loop4"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             ],
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_name": "ceph_lv1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_size": "21470642176",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "name": "ceph_lv1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "tags": {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.crush_device_class": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.encrypted": "0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_id": "1",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.vdo": "0"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             },
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "vg_name": "ceph_vg1"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         }
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     ],
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     "2": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "devices": [
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "/dev/loop5"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             ],
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_name": "ceph_lv2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_size": "21470642176",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "name": "ceph_lv2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "tags": {
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.crush_device_class": "",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.encrypted": "0",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osd_id": "2",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:                 "ceph.vdo": "0"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             },
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "type": "block",
Nov 29 07:29:25 compute-0 festive_ellis[94553]:             "vg_name": "ceph_vg2"
Nov 29 07:29:25 compute-0 festive_ellis[94553]:         }
Nov 29 07:29:25 compute-0 festive_ellis[94553]:     ]
Nov 29 07:29:25 compute-0 festive_ellis[94553]: }
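[Editor's annotation, not part of the journal] The block ending above is the JSON that `ceph-volume lvm list --format json` printed inside the festive_ellis container: a map from OSD id ("0", "1", "2") to the logical volume backing each OSD, with the cluster fsid, OSD fsid and OSD id carried as LVM tags. The sketch below is a minimal, illustrative way to flatten that structure into one row per logical volume; it assumes the JSON is piped in on stdin and is not how cephadm itself consumes this output.

# Sketch only: flatten ceph-volume "lvm list" JSON (as logged above) into rows.
import json, sys

def summarize(lvm_list):
    # lvm_list: {"0": [<lv record>, ...], "1": [...], ...}
    rows = []
    for osd_id, records in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for rec in records:
            rows.append({
                "osd_id": osd_id,
                "lv_path": rec.get("lv_path"),
                "devices": rec.get("devices", []),
                "osd_fsid": rec.get("tags", {}).get("ceph.osd_fsid"),
            })
    return rows

if __name__ == "__main__":
    for row in summarize(json.load(sys.stdin)):
        print(row["osd_id"], row["lv_path"], ",".join(row["devices"]), row["osd_fsid"])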
Nov 29 07:29:25 compute-0 systemd[1]: libpod-dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d.scope: Deactivated successfully.
Nov 29 07:29:25 compute-0 podman[94536]: 2025-11-29 07:29:25.626873775 +0000 UTC m=+0.977364656 container died dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3b22e3836a239b77a48370f5fc465d6e7ea1a078e02e26e56d1b133259b2a2-merged.mount: Deactivated successfully.
Nov 29 07:29:25 compute-0 podman[94536]: 2025-11-29 07:29:25.701440537 +0000 UTC m=+1.051931418 container remove dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:25 compute-0 systemd[1]: libpod-conmon-dda713f759c25602ef5ce520f6fc5244e02b904915e0c0fdc617e7bf6a0f6d1d.scope: Deactivated successfully.
Nov 29 07:29:25 compute-0 sudo[94432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:25 compute-0 sudo[94572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:25 compute-0 sudo[94572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:25 compute-0 sudo[94572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:25 compute-0 sudo[94597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:25 compute-0 sudo[94597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:25 compute-0 sudo[94597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:25 compute-0 sudo[94622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:25 compute-0 sudo[94622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:25 compute-0 ceph-mon[75237]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:25 compute-0 sudo[94622]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:26 compute-0 sudo[94647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:29:26 compute-0 sudo[94647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:26 compute-0 podman[94714]: 2025-11-29 07:29:26.391758635 +0000 UTC m=+0.051321172 container create 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:29:26 compute-0 systemd[1]: Started libpod-conmon-45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898.scope.
Nov 29 07:29:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:26 compute-0 podman[94714]: 2025-11-29 07:29:26.371668919 +0000 UTC m=+0.031231456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:26 compute-0 podman[94714]: 2025-11-29 07:29:26.473372034 +0000 UTC m=+0.132934581 container init 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:29:26 compute-0 podman[94714]: 2025-11-29 07:29:26.479689614 +0000 UTC m=+0.139252131 container start 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:29:26 compute-0 podman[94714]: 2025-11-29 07:29:26.482794916 +0000 UTC m=+0.142357443 container attach 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 07:29:26 compute-0 quirky_ptolemy[94730]: 167 167
Nov 29 07:29:26 compute-0 systemd[1]: libpod-45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898.scope: Deactivated successfully.
Nov 29 07:29:26 compute-0 podman[94735]: 2025-11-29 07:29:26.529440922 +0000 UTC m=+0.026959391 container died 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f6c3faef8adb967b340f4eb548acb19ad8591c38c4a0c319c98b45ad94cfcf-merged.mount: Deactivated successfully.
Nov 29 07:29:26 compute-0 podman[94735]: 2025-11-29 07:29:26.566717848 +0000 UTC m=+0.064236287 container remove 45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:29:26 compute-0 systemd[1]: libpod-conmon-45d99507c7f9ff564b4a62ee3c66c563b3abeb7e4305443ec8ae9f83987dd898.scope: Deactivated successfully.
Nov 29 07:29:26 compute-0 podman[94756]: 2025-11-29 07:29:26.74985947 +0000 UTC m=+0.046620427 container create 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:26 compute-0 systemd[1]: Started libpod-conmon-3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627.scope.
Nov 29 07:29:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:26 compute-0 podman[94756]: 2025-11-29 07:29:26.729753632 +0000 UTC m=+0.026514599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18cba1bd5f42eb3cde3dcae03e4ca199efe6acbaef698f4718e42d8a3cb7ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18cba1bd5f42eb3cde3dcae03e4ca199efe6acbaef698f4718e42d8a3cb7ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18cba1bd5f42eb3cde3dcae03e4ca199efe6acbaef698f4718e42d8a3cb7ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18cba1bd5f42eb3cde3dcae03e4ca199efe6acbaef698f4718e42d8a3cb7ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:26 compute-0 podman[94756]: 2025-11-29 07:29:26.840293335 +0000 UTC m=+0.137054292 container init 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:26 compute-0 podman[94756]: 2025-11-29 07:29:26.851297239 +0000 UTC m=+0.148058206 container start 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:29:26 compute-0 podman[94756]: 2025-11-29 07:29:26.855509611 +0000 UTC m=+0.152270578 container attach 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]: {
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_id": 2,
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "type": "bluestore"
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     },
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_id": 0,
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "type": "bluestore"
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     },
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_id": 1,
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:         "type": "bluestore"
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]:     }
Nov 29 07:29:27 compute-0 nostalgic_haslett[94772]: }
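[Editor's annotation, not part of the journal] The block ending above is the `ceph-volume raw list --format json` view of the same three OSDs, keyed by osd_uuid rather than by OSD id. A minimal consistency-check sketch, assuming the two JSON documents shown in this journal have been saved to the illustrative files lvm.json and raw.json (file names are not taken from the log):

# Sketch only: confirm the raw-list and lvm-list views describe the same OSDs.
import json

with open("lvm.json") as f:   # illustrative path, not from the journal
    lvm = json.load(f)
with open("raw.json") as f:   # illustrative path, not from the journal
    raw = json.load(f)

lvm_pairs = {
    (rec["tags"]["ceph.osd_fsid"], int(rec["tags"]["ceph.osd_id"]))
    for records in lvm.values()
    for rec in records
}
for osd_uuid, entry in raw.items():
    # raw list keys each entry by osd_uuid and stores osd_id as an integer
    assert (osd_uuid, entry["osd_id"]) in lvm_pairs, osd_uuid
print("raw list and lvm list agree on", len(raw), "OSDs")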
Nov 29 07:29:27 compute-0 systemd[1]: libpod-3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627.scope: Deactivated successfully.
Nov 29 07:29:27 compute-0 systemd[1]: libpod-3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627.scope: Consumed 1.092s CPU time.
Nov 29 07:29:27 compute-0 podman[94756]: 2025-11-29 07:29:27.936163745 +0000 UTC m=+1.232924722 container died 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:29:27 compute-0 ceph-mon[75237]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e18cba1bd5f42eb3cde3dcae03e4ca199efe6acbaef698f4718e42d8a3cb7ce-merged.mount: Deactivated successfully.
Nov 29 07:29:28 compute-0 podman[94756]: 2025-11-29 07:29:28.000504094 +0000 UTC m=+1.297265051 container remove 3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_haslett, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:29:28 compute-0 systemd[1]: libpod-conmon-3eb300b37dc39e529eb18b677a7dbb3d3fc751c0a816b63f7401c2a01ddba627.scope: Deactivated successfully.
Nov 29 07:29:28 compute-0 sudo[94647]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:28 compute-0 sudo[94819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:28 compute-0 sudo[94819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 sudo[94819]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 sudo[94844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:29:28 compute-0 sudo[94844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 sudo[94844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 sudo[94869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:28 compute-0 sudo[94869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 sudo[94869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 sudo[94894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:28 compute-0 sudo[94894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 sudo[94894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 sudo[94919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:28 compute-0 sudo[94919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 sudo[94919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:28 compute-0 sudo[94944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:29:28 compute-0 sudo[94944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:29 compute-0 podman[95041]: 2025-11-29 07:29:29.680933167 +0000 UTC m=+0.731594521 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:29 compute-0 podman[95041]: 2025-11-29 07:29:29.825731204 +0000 UTC m=+0.876392558 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:29:30 compute-0 sudo[95155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvrdctginfnpqpkchgqjwhybtmlkhwp ; /usr/bin/python3'
Nov 29 07:29:30 compute-0 sudo[95155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:30 compute-0 python3[95164]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:30 compute-0 sudo[94944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 podman[95192]: 2025-11-29 07:29:30.372675823 +0000 UTC m=+0.051205138 container create 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 621a5f90-ff24-4746-8e20-c767a20312eb does not exist
Nov 29 07:29:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 43727923-97b2-4f14-8c48-6e40ec483ad9 does not exist
Nov 29 07:29:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 876a3469-56f1-4bcf-a9b7-f42d3344cb63 does not exist
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:30 compute-0 systemd[1]: Started libpod-conmon-9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05.scope.
Nov 29 07:29:30 compute-0 podman[95192]: 2025-11-29 07:29:30.350552392 +0000 UTC m=+0.029081687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbc13b2c2ffec50b40241f27176e53d135268a4201b55f3feb35fb1a8a2ef5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbc13b2c2ffec50b40241f27176e53d135268a4201b55f3feb35fb1a8a2ef5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbc13b2c2ffec50b40241f27176e53d135268a4201b55f3feb35fb1a8a2ef5d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:30 compute-0 sudo[95206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:30 compute-0 podman[95192]: 2025-11-29 07:29:30.473671681 +0000 UTC m=+0.152200956 container init 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:29:30 compute-0 sudo[95206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:30 compute-0 sudo[95206]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:30 compute-0 podman[95192]: 2025-11-29 07:29:30.485687562 +0000 UTC m=+0.164216837 container start 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:29:30 compute-0 podman[95192]: 2025-11-29 07:29:30.488906758 +0000 UTC m=+0.167436033 container attach 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:30 compute-0 ceph-mon[75237]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:30 compute-0 sudo[95238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:30 compute-0 sudo[95238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:30 compute-0 sudo[95238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:30 compute-0 sudo[95263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:30 compute-0 sudo[95263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:30 compute-0 sudo[95263]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:30 compute-0 sudo[95288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:29:30 compute-0 sudo[95288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:29:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357840130' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:29:31 compute-0 naughty_bassi[95216]: 
Nov 29 07:29:31 compute-0 naughty_bassi[95216]: {"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":164,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1764401359,"num_in_osds":3,"osd_in_since":1764401310,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502792192,"bytes_avail":63909134336,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:28:40.536457+0000","services":{}},"progress_events":{}}
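The blob above is the plain JSON emitted by the containerized "ceph status --format json" call dispatched at 07:29:31. A minimal Python sketch for pulling health, quorum, and OSD counts out of it (field names and values copied from that output; this is illustrative, not part of the recorded run):

    #!/usr/bin/env python3
    # Sketch: summarize the status JSON logged by the naughty_bassi container.
    # The JSON below is a trimmed copy of the logged blob; real output has
    # more keys (pgmap, mgrmap, servicemap, ...).
    import json

    status_json = ('{"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e",'
                   '"health":{"status":"HEALTH_OK"},'
                   '"quorum_names":["compute-0"],'
                   '"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"num_in_osds":3}}')

    status = json.loads(status_json)
    print(status["health"]["status"])      # HEALTH_OK
    print(status["quorum_names"])          # ['compute-0']
    osd = status["osdmap"]
    print(f'{osd["num_up_osds"]}/{osd["num_osds"]} OSDs up, '
          f'{osd["num_in_osds"]} in (osdmap epoch {osd["epoch"]})')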
Nov 29 07:29:31 compute-0 podman[95371]: 2025-11-29 07:29:31.015759979 +0000 UTC m=+0.029312944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:31 compute-0 systemd[1]: libpod-9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05.scope: Deactivated successfully.
Nov 29 07:29:31 compute-0 podman[95371]: 2025-11-29 07:29:31.503794975 +0000 UTC m=+0.517347940 container create 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:29:31 compute-0 systemd[1]: Started libpod-conmon-81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3.scope.
Nov 29 07:29:31 compute-0 podman[95192]: 2025-11-29 07:29:31.549023823 +0000 UTC m=+1.227553098 container died 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:31 compute-0 ceph-mon[75237]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/357840130' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:29:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfbc13b2c2ffec50b40241f27176e53d135268a4201b55f3feb35fb1a8a2ef5d-merged.mount: Deactivated successfully.
Nov 29 07:29:31 compute-0 podman[95192]: 2025-11-29 07:29:31.851720077 +0000 UTC m=+1.530249352 container remove 9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05 (image=quay.io/ceph/ceph:v18, name=naughty_bassi, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:29:31 compute-0 sudo[95155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:31 compute-0 systemd[1]: libpod-conmon-9efaa33b731e7d8e3cf3b0752d04d412905164ae165e3354554c7eec37fd6b05.scope: Deactivated successfully.
Nov 29 07:29:32 compute-0 sudo[95429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xofduytlmrkbxmtwvszalhkffrmtcbba ; /usr/bin/python3'
Nov 29 07:29:32 compute-0 sudo[95429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:32 compute-0 python3[95431]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:32 compute-0 podman[95371]: 2025-11-29 07:29:32.408277703 +0000 UTC m=+1.421830738 container init 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:29:32 compute-0 podman[95371]: 2025-11-29 07:29:32.414581461 +0000 UTC m=+1.428134406 container start 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:29:32 compute-0 flamboyant_pike[95400]: 167 167
Nov 29 07:29:32 compute-0 systemd[1]: libpod-81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3.scope: Deactivated successfully.
Nov 29 07:29:32 compute-0 conmon[95400]: conmon 81f7651c7720fef909d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3.scope/container/memory.events
Nov 29 07:29:32 compute-0 podman[95371]: 2025-11-29 07:29:32.479949218 +0000 UTC m=+1.493502193 container attach 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:32 compute-0 podman[95371]: 2025-11-29 07:29:32.482807184 +0000 UTC m=+1.496360169 container died 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:29:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b550180df34ccfd2a46d358015a8d90d17d4eedaf06fadb790e88721e19089-merged.mount: Deactivated successfully.
Nov 29 07:29:32 compute-0 podman[95371]: 2025-11-29 07:29:32.693340678 +0000 UTC m=+1.706893623 container remove 81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pike, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:29:32 compute-0 podman[95432]: 2025-11-29 07:29:32.738677878 +0000 UTC m=+0.323653375 container create d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:32 compute-0 systemd[1]: Started libpod-conmon-d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d.scope.
Nov 29 07:29:32 compute-0 systemd[1]: libpod-conmon-81f7651c7720fef909d0945e15bcf51aab84924617407ec3e52f0ae04fb725e3.scope: Deactivated successfully.
Nov 29 07:29:32 compute-0 podman[95432]: 2025-11-29 07:29:32.715371355 +0000 UTC m=+0.300346862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206d8a555eb86d0783bd268135adf7ed70b545aaa7a16488eb2bd992ffc21df5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206d8a555eb86d0783bd268135adf7ed70b545aaa7a16488eb2bd992ffc21df5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:32 compute-0 podman[95432]: 2025-11-29 07:29:32.841860325 +0000 UTC m=+0.426835862 container init d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:32 compute-0 podman[95432]: 2025-11-29 07:29:32.849216051 +0000 UTC m=+0.434191548 container start d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:29:32 compute-0 podman[95432]: 2025-11-29 07:29:32.8532979 +0000 UTC m=+0.438273477 container attach d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:29:32 compute-0 podman[95471]: 2025-11-29 07:29:32.910274611 +0000 UTC m=+0.082402691 container create 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:29:32 compute-0 podman[95471]: 2025-11-29 07:29:32.865210078 +0000 UTC m=+0.037338208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:32 compute-0 systemd[1]: Started libpod-conmon-46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44.scope.
Nov 29 07:29:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:33 compute-0 podman[95471]: 2025-11-29 07:29:33.023836035 +0000 UTC m=+0.195964125 container init 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:29:33 compute-0 podman[95471]: 2025-11-29 07:29:33.039343499 +0000 UTC m=+0.211471559 container start 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:33 compute-0 podman[95471]: 2025-11-29 07:29:33.045257007 +0000 UTC m=+0.217385087 container attach 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:29:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:33 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/597849647' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 07:29:33 compute-0 ceph-mon[75237]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/597849647' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:33 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/597849647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 29 07:29:33 compute-0 objective_hypatia[95464]: pool 'vms' created
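This "pool 'vms' created" message is the result of the ansible task logged at 07:29:32, which wraps a plain "ceph osd pool create" in a throwaway podman container. A minimal Python sketch of an equivalent call, with the image, fsid, and arguments copied from the logged command (the assimilate_ceph.conf volume mount from the recorded task is omitted here for brevity):

    #!/usr/bin/env python3
    # Sketch: the containerized pool-create call, mirroring the logged
    # ansible-ansible.legacy.command invocation. Illustrative only.
    import subprocess

    def ceph_pool_create(pool: str) -> str:
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
            "--fsid", "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "osd", "pool", "create", pool, "replicated_rule",
            "--autoscale-mode", "on",
        ]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout  # e.g. "pool 'vms' created"

    # ceph_pool_create("vms"); the same task later creates "volumes".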
Nov 29 07:29:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 29 07:29:33 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:33 compute-0 systemd[1]: libpod-d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d.scope: Deactivated successfully.
Nov 29 07:29:33 compute-0 podman[95527]: 2025-11-29 07:29:33.933801519 +0000 UTC m=+0.042019753 container died d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:29:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-206d8a555eb86d0783bd268135adf7ed70b545aaa7a16488eb2bd992ffc21df5-merged.mount: Deactivated successfully.
Nov 29 07:29:33 compute-0 podman[95527]: 2025-11-29 07:29:33.981823982 +0000 UTC m=+0.090042196 container remove d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d (image=quay.io/ceph/ceph:v18, name=objective_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:29:33 compute-0 systemd[1]: libpod-conmon-d1e77600beb9ac136aaee424c21cd10fc7f9dbf2c2c8f6dee38233c1ee0c462d.scope: Deactivated successfully.
Nov 29 07:29:34 compute-0 sudo[95429]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-0 pensive_williamson[95490]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:29:34 compute-0 pensive_williamson[95490]: --> relative data size: 1.0
Nov 29 07:29:34 compute-0 pensive_williamson[95490]: --> All data devices are unavailable
Nov 29 07:29:34 compute-0 systemd[1]: libpod-46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44.scope: Deactivated successfully.
Nov 29 07:29:34 compute-0 systemd[1]: libpod-46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44.scope: Consumed 1.048s CPU time.
Nov 29 07:29:34 compute-0 podman[95471]: 2025-11-29 07:29:34.149410138 +0000 UTC m=+1.321538208 container died 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 29 07:29:34 compute-0 sudo[95581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzgtivarzvntvhlzvjmoazecbsalbfk ; /usr/bin/python3'
Nov 29 07:29:34 compute-0 sudo[95581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-64968177f65588a24ac3699a29dffefa640b1ac928ebba75b276981aa4f18312-merged.mount: Deactivated successfully.
Nov 29 07:29:34 compute-0 podman[95471]: 2025-11-29 07:29:34.213707005 +0000 UTC m=+1.385835055 container remove 46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_williamson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:29:34 compute-0 systemd[1]: libpod-conmon-46c6e2bfbab6f07941ebe732a46e561e803ac7595fdf3e95514328b78d1c5f44.scope: Deactivated successfully.
Nov 29 07:29:34 compute-0 sudo[95288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-0 python3[95584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:34 compute-0 sudo[95597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:34 compute-0 sudo[95597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:34 compute-0 sudo[95597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-0 podman[95620]: 2025-11-29 07:29:34.390391745 +0000 UTC m=+0.049493633 container create 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:34 compute-0 sudo[95628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:34 compute-0 sudo[95628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:34 compute-0 sudo[95628]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-0 systemd[1]: Started libpod-conmon-4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100.scope.
Nov 29 07:29:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ed5d9c6a429fae5d1a72b8e3e4332ce54842f275d432db3a2e15355f9b141/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ed5d9c6a429fae5d1a72b8e3e4332ce54842f275d432db3a2e15355f9b141/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:34 compute-0 podman[95620]: 2025-11-29 07:29:34.465580963 +0000 UTC m=+0.124682901 container init 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:29:34 compute-0 podman[95620]: 2025-11-29 07:29:34.371023368 +0000 UTC m=+0.030125276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:34 compute-0 sudo[95662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:34 compute-0 podman[95620]: 2025-11-29 07:29:34.473822633 +0000 UTC m=+0.132924541 container start 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:34 compute-0 sudo[95662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:34 compute-0 podman[95620]: 2025-11-29 07:29:34.477276636 +0000 UTC m=+0.136378514 container attach 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:29:34 compute-0 sudo[95662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:34 compute-0 sudo[95691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:29:34 compute-0 sudo[95691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
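The "ceph-volume ... lvm list --format json" call just dispatched prints, per LV, both a raw "lv_tags" string and its parsed "tags" mapping (see the inspiring_brattain output below). The raw form is comma-separated key=value pairs; a minimal sketch of one way to split it, with example values copied from that output (assumes no commas inside values):

    #!/usr/bin/env python3
    # Sketch: split a ceph-volume lv_tags string into a dict, mirroring the
    # "tags" mapping ceph-volume itself prints. Trimmed example values.
    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.cluster_name=ceph,ceph.osd_id=0,ceph.type=block")

    tags = dict(pair.split("=", 1) for pair in lv_tags.split(","))
    print(tags["ceph.osd_id"], tags["ceph.type"])  # 0 block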
Nov 29 07:29:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v68: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.854331676 +0000 UTC m=+0.052973486 container create b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 29 07:29:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/597849647' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:34 compute-0 ceph-mon[75237]: osdmap e19: 3 total, 3 up, 3 in
Nov 29 07:29:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 29 07:29:34 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:34 compute-0 systemd[1]: Started libpod-conmon-b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708.scope.
Nov 29 07:29:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.831706902 +0000 UTC m=+0.030348732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.936005948 +0000 UTC m=+0.134647778 container init b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.942898502 +0000 UTC m=+0.141540312 container start b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:29:34 compute-0 intelligent_napier[95790]: 167 167
Nov 29 07:29:34 compute-0 systemd[1]: libpod-b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708.scope: Deactivated successfully.
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.948485551 +0000 UTC m=+0.147127381 container attach b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.949514319 +0000 UTC m=+0.148156149 container died b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-91f8bb24dbccddacb530d660191825b882130ccb503747811b208c44e7baaa36-merged.mount: Deactivated successfully.
Nov 29 07:29:34 compute-0 podman[95755]: 2025-11-29 07:29:34.990989167 +0000 UTC m=+0.189630977 container remove b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:29:35 compute-0 systemd[1]: libpod-conmon-b52a6ad166846218986f5fc12fc8c2ea8ac111defbdfacbb1b1350246114b708.scope: Deactivated successfully.
Nov 29 07:29:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:35 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4055843320' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:35 compute-0 podman[95816]: 2025-11-29 07:29:35.146595753 +0000 UTC m=+0.044660884 container create 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:35 compute-0 systemd[1]: Started libpod-conmon-03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad.scope.
Nov 29 07:29:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75affeb2449b28dd966b1ad1199020150d5f9197f84e64c33cf0c514ea9cdf24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75affeb2449b28dd966b1ad1199020150d5f9197f84e64c33cf0c514ea9cdf24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75affeb2449b28dd966b1ad1199020150d5f9197f84e64c33cf0c514ea9cdf24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75affeb2449b28dd966b1ad1199020150d5f9197f84e64c33cf0c514ea9cdf24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:35 compute-0 podman[95816]: 2025-11-29 07:29:35.124283697 +0000 UTC m=+0.022348868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:35 compute-0 podman[95816]: 2025-11-29 07:29:35.222956582 +0000 UTC m=+0.121021783 container init 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:29:35 compute-0 podman[95816]: 2025-11-29 07:29:35.228601493 +0000 UTC m=+0.126666634 container start 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:29:35 compute-0 podman[95816]: 2025-11-29 07:29:35.233457753 +0000 UTC m=+0.131522904 container attach 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:29:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 07:29:35 compute-0 ceph-mon[75237]: pgmap v68: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:35 compute-0 ceph-mon[75237]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:35 compute-0 ceph-mon[75237]: osdmap e20: 3 total, 3 up, 3 in
Nov 29 07:29:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4055843320' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:35 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4055843320' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 29 07:29:35 compute-0 infallible_dirac[95670]: pool 'volumes' created
Nov 29 07:29:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 29 07:29:35 compute-0 systemd[1]: libpod-4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100.scope: Deactivated successfully.
Nov 29 07:29:35 compute-0 podman[95620]: 2025-11-29 07:29:35.900583251 +0000 UTC m=+1.559685129 container died 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 07:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-657ed5d9c6a429fae5d1a72b8e3e4332ce54842f275d432db3a2e15355f9b141-merged.mount: Deactivated successfully.
Nov 29 07:29:35 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:35 compute-0 podman[95620]: 2025-11-29 07:29:35.958283832 +0000 UTC m=+1.617385720 container remove 4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100 (image=quay.io/ceph/ceph:v18, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:29:35 compute-0 systemd[1]: libpod-conmon-4acbf58979092caae45beb2c78f0f57661fc4ef3018fcb1d657a069467c46100.scope: Deactivated successfully.
Nov 29 07:29:35 compute-0 sudo[95581]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]: {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     "0": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "devices": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "/dev/loop3"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             ],
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_name": "ceph_lv0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_size": "21470642176",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "name": "ceph_lv0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "tags": {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.crush_device_class": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.encrypted": "0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_id": "0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.vdo": "0"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             },
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "vg_name": "ceph_vg0"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         }
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     ],
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     "1": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "devices": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "/dev/loop4"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             ],
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_name": "ceph_lv1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_size": "21470642176",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "name": "ceph_lv1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "tags": {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.crush_device_class": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.encrypted": "0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_id": "1",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.vdo": "0"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             },
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "vg_name": "ceph_vg1"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         }
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     ],
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     "2": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "devices": [
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "/dev/loop5"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             ],
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_name": "ceph_lv2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_size": "21470642176",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "name": "ceph_lv2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "tags": {
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.cluster_name": "ceph",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.crush_device_class": "",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.encrypted": "0",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osd_id": "2",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:                 "ceph.vdo": "0"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             },
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "type": "block",
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:             "vg_name": "ceph_vg2"
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:         }
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]:     ]
Nov 29 07:29:35 compute-0 inspiring_brattain[95832]: }
Nov 29 07:29:36 compute-0 systemd[1]: libpod-03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad.scope: Deactivated successfully.
Nov 29 07:29:36 compute-0 podman[95816]: 2025-11-29 07:29:36.013814256 +0000 UTC m=+0.911879397 container died 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-75affeb2449b28dd966b1ad1199020150d5f9197f84e64c33cf0c514ea9cdf24-merged.mount: Deactivated successfully.
Nov 29 07:29:36 compute-0 podman[95816]: 2025-11-29 07:29:36.070260613 +0000 UTC m=+0.968325764 container remove 03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:29:36 compute-0 systemd[1]: libpod-conmon-03d039e333379f8b900f30ffb009f1636046c788f42ad4c23222ba8643e979ad.scope: Deactivated successfully.
Nov 29 07:29:36 compute-0 sudo[95891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpwunitabztdgojqwlvtaflieuilmaem ; /usr/bin/python3'
Nov 29 07:29:36 compute-0 sudo[95691]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:36 compute-0 sudo[95891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:36 compute-0 sudo[95894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:36 compute-0 sudo[95894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:36 compute-0 sudo[95894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:36 compute-0 sudo[95919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:36 compute-0 sudo[95919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:36 compute-0 sudo[95919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:36 compute-0 python3[95893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:36 compute-0 sudo[95944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:36 compute-0 sudo[95944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:36 compute-0 sudo[95944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:36 compute-0 sudo[95981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:29:36 compute-0 sudo[95981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:36 compute-0 podman[95945]: 2025-11-29 07:29:36.278391202 +0000 UTC m=+0.024123615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:36 compute-0 podman[95945]: 2025-11-29 07:29:36.534075412 +0000 UTC m=+0.279807845 container create c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v71: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:36 compute-0 systemd[1]: Started libpod-conmon-c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032.scope.
Nov 29 07:29:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73fc9230f0118de8a7fa3d29852be0f025b3eb87eca41f3954b73980b15a78a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73fc9230f0118de8a7fa3d29852be0f025b3eb87eca41f3954b73980b15a78a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:36 compute-0 podman[95945]: 2025-11-29 07:29:36.62796365 +0000 UTC m=+0.373696123 container init c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:29:36 compute-0 podman[95945]: 2025-11-29 07:29:36.640381061 +0000 UTC m=+0.386113494 container start c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:29:36 compute-0 podman[95945]: 2025-11-29 07:29:36.645120838 +0000 UTC m=+0.390853341 container attach c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.798276938 +0000 UTC m=+0.037311317 container create 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:36 compute-0 systemd[1]: Started libpod-conmon-3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876.scope.
Nov 29 07:29:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.863263044 +0000 UTC m=+0.102297443 container init 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.869021328 +0000 UTC m=+0.108055707 container start 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:36 compute-0 objective_keller[96069]: 167 167
Nov 29 07:29:36 compute-0 systemd[1]: libpod-3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876.scope: Deactivated successfully.
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.87471689 +0000 UTC m=+0.113751299 container attach 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.875202163 +0000 UTC m=+0.114236542 container died 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.78184437 +0000 UTC m=+0.020878769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 07:29:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 29 07:29:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4055843320' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:36 compute-0 ceph-mon[75237]: osdmap e21: 3 total, 3 up, 3 in
Nov 29 07:29:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 29 07:29:36 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa42de934f5ee12f2fc7b68d84b98f93615e3fbcca282d39537fd84f34aa1ff5-merged.mount: Deactivated successfully.
Nov 29 07:29:36 compute-0 podman[96053]: 2025-11-29 07:29:36.92524848 +0000 UTC m=+0.164282869 container remove 3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:36 compute-0 systemd[1]: libpod-conmon-3fcb11f51913df4a48ca7b4e1d46f7e179256baa872efa83ad773388f88ea876.scope: Deactivated successfully.
Nov 29 07:29:37 compute-0 podman[96111]: 2025-11-29 07:29:37.076841388 +0000 UTC m=+0.047815338 container create c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:37 compute-0 systemd[1]: Started libpod-conmon-c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254.scope.
Nov 29 07:29:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ac975cad9767034c6becfe70adf98f068498696adaaa9204ca66f663244c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ac975cad9767034c6becfe70adf98f068498696adaaa9204ca66f663244c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ac975cad9767034c6becfe70adf98f068498696adaaa9204ca66f663244c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ac975cad9767034c6becfe70adf98f068498696adaaa9204ca66f663244c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:37 compute-0 podman[96111]: 2025-11-29 07:29:37.059239709 +0000 UTC m=+0.030213669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:37 compute-0 podman[96111]: 2025-11-29 07:29:37.158999573 +0000 UTC m=+0.129973533 container init c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:29:37 compute-0 podman[96111]: 2025-11-29 07:29:37.166889504 +0000 UTC m=+0.137863444 container start c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:37 compute-0 podman[96111]: 2025-11-29 07:29:37.191257095 +0000 UTC m=+0.162231055 container attach c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:29:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3087998429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 07:29:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3087998429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 29 07:29:37 compute-0 youthful_hellman[96028]: pool 'backups' created
Nov 29 07:29:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 29 07:29:37 compute-0 ceph-mon[75237]: pgmap v71: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:37 compute-0 ceph-mon[75237]: osdmap e22: 3 total, 3 up, 3 in
Nov 29 07:29:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3087998429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:37 compute-0 systemd[1]: libpod-c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032.scope: Deactivated successfully.
Nov 29 07:29:37 compute-0 podman[95945]: 2025-11-29 07:29:37.92713434 +0000 UTC m=+1.672866813 container died c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c73fc9230f0118de8a7fa3d29852be0f025b3eb87eca41f3954b73980b15a78a-merged.mount: Deactivated successfully.
Nov 29 07:29:37 compute-0 podman[95945]: 2025-11-29 07:29:37.974829524 +0000 UTC m=+1.720561937 container remove c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032 (image=quay.io/ceph/ceph:v18, name=youthful_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:29:37 compute-0 systemd[1]: libpod-conmon-c509491294010e728d02e151e4e2ad8d988b69e50b4a49a4abd11a8632223032.scope: Deactivated successfully.
Nov 29 07:29:37 compute-0 sudo[95891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:38 compute-0 sudo[96193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitawaczjaqzqfrckgfowzeskjdzvldh ; /usr/bin/python3'
Nov 29 07:29:38 compute-0 sudo[96193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:38 compute-0 musing_poitras[96128]: {
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_id": 2,
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "type": "bluestore"
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     },
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_id": 0,
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "type": "bluestore"
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     },
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_id": 1,
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:29:38 compute-0 musing_poitras[96128]:         "type": "bluestore"
Nov 29 07:29:38 compute-0 musing_poitras[96128]:     }
Nov 29 07:29:38 compute-0 musing_poitras[96128]: }
Nov 29 07:29:38 compute-0 systemd[1]: libpod-c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254.scope: Deactivated successfully.
Nov 29 07:29:38 compute-0 podman[96111]: 2025-11-29 07:29:38.199984897 +0000 UTC m=+1.170958847 container died c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:38 compute-0 systemd[1]: libpod-c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254.scope: Consumed 1.034s CPU time.
Nov 29 07:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-055ac975cad9767034c6becfe70adf98f068498696adaaa9204ca66f663244c8-merged.mount: Deactivated successfully.
Nov 29 07:29:38 compute-0 podman[96111]: 2025-11-29 07:29:38.261783268 +0000 UTC m=+1.232757198 container remove c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_poitras, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:38 compute-0 systemd[1]: libpod-conmon-c5d68fe5c1a7a2dbc9ec337814718721df127118eb527349e9eafee9e48e9254.scope: Deactivated successfully.
Nov 29 07:29:38 compute-0 sudo[95981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:29:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:29:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:38 compute-0 python3[96198]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:38 compute-0 sudo[96214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:38 compute-0 sudo[96214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:38 compute-0 sudo[96214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:38 compute-0 podman[96218]: 2025-11-29 07:29:38.409975876 +0000 UTC m=+0.041535020 container create 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:29:38 compute-0 systemd[1]: Started libpod-conmon-3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f.scope.
Nov 29 07:29:38 compute-0 sudo[96252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:29:38 compute-0 sudo[96252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:38 compute-0 sudo[96252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c808b436b3c9ad4236e4629947f8b0ef5da978e233f32795e0123cf4d66c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c808b436b3c9ad4236e4629947f8b0ef5da978e233f32795e0123cf4d66c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:38 compute-0 podman[96218]: 2025-11-29 07:29:38.394876373 +0000 UTC m=+0.026435537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:38 compute-0 podman[96218]: 2025-11-29 07:29:38.491662808 +0000 UTC m=+0.123221972 container init 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:38 compute-0 podman[96218]: 2025-11-29 07:29:38.50407611 +0000 UTC m=+0.135635254 container start 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:38 compute-0 podman[96218]: 2025-11-29 07:29:38.508007044 +0000 UTC m=+0.139566198 container attach 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:29:38
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:29:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:29:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3087998429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:38 compute-0 ceph-mon[75237]: osdmap e23: 3 total, 3 up, 3 in
Nov 29 07:29:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3937314498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:39 compute-0 sshd-session[95403]: Received disconnect from 45.78.219.195 port 46236:11: Bye Bye [preauth]
Nov 29 07:29:39 compute-0 sshd-session[95403]: Disconnected from authenticating user root 45.78.219.195 port 46236 [preauth]
Nov 29 07:29:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 07:29:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3937314498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 29 07:29:39 compute-0 blissful_dhawan[96278]: pool 'images' created
Nov 29 07:29:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 29 07:29:39 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev cea619b0-062c-4abd-9194-c9d7dfdaecbb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 07:29:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:29:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:39 compute-0 systemd[1]: libpod-3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f.scope: Deactivated successfully.
Nov 29 07:29:39 compute-0 podman[96218]: 2025-11-29 07:29:39.371662113 +0000 UTC m=+1.003221297 container died 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1c808b436b3c9ad4236e4629947f8b0ef5da978e233f32795e0123cf4d66c6-merged.mount: Deactivated successfully.
Nov 29 07:29:39 compute-0 podman[96218]: 2025-11-29 07:29:39.460186907 +0000 UTC m=+1.091746061 container remove 3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f (image=quay.io/ceph/ceph:v18, name=blissful_dhawan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:29:39 compute-0 systemd[1]: libpod-conmon-3bc3a92587f6ee842d355a451d174af84f1c7db4fde6c1c451209d045d06341f.scope: Deactivated successfully.
Nov 29 07:29:39 compute-0 sudo[96193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:39 compute-0 sudo[96343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylsvxppdzuxpvqzatixcdinpwbhatier ; /usr/bin/python3'
Nov 29 07:29:39 compute-0 sudo[96343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:39 compute-0 python3[96345]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:39 compute-0 podman[96346]: 2025-11-29 07:29:39.916143315 +0000 UTC m=+0.047103109 container create 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:39 compute-0 ceph-mon[75237]: pgmap v74: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3937314498' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3937314498' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:39 compute-0 ceph-mon[75237]: osdmap e24: 3 total, 3 up, 3 in
Nov 29 07:29:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:39 compute-0 systemd[1]: Started libpod-conmon-83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f.scope.
Nov 29 07:29:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0129097dafa74f2ab0a87316d469345973fa34a7b037b63bb29a5be6e037416/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0129097dafa74f2ab0a87316d469345973fa34a7b037b63bb29a5be6e037416/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:39 compute-0 podman[96346]: 2025-11-29 07:29:39.893154431 +0000 UTC m=+0.024114255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:39 compute-0 podman[96346]: 2025-11-29 07:29:39.994775115 +0000 UTC m=+0.125735009 container init 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:29:40 compute-0 podman[96346]: 2025-11-29 07:29:40.000551119 +0000 UTC m=+0.131510983 container start 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:40 compute-0 podman[96346]: 2025-11-29 07:29:40.004578867 +0000 UTC m=+0.135538751 container attach 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:29:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 07:29:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v76: 5 pgs: 1 creating+peering, 1 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:40 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:40 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev 4155bd8b-43a2-4672-8589-e15dfbd63831 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 07:29:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:41 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:41 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:41 compute-0 ceph-mon[75237]: osdmap e25: 3 total, 3 up, 3 in
Nov 29 07:29:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/970856652' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/970856652' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 29 07:29:41 compute-0 sad_borg[96362]: pool 'cephfs.cephfs.meta' created
Nov 29 07:29:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev e1ffde1a-b2d1-479e-a5de-bbe822478b26 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev cea619b0-062c-4abd-9194-c9d7dfdaecbb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event cea619b0-062c-4abd-9194-c9d7dfdaecbb (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev 4155bd8b-43a2-4672-8589-e15dfbd63831 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event 4155bd8b-43a2-4672-8589-e15dfbd63831 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 1 seconds
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev e1ffde1a-b2d1-479e-a5de-bbe822478b26 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 07:29:41 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event e1ffde1a-b2d1-479e-a5de-bbe822478b26 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
Nov 29 07:29:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:42 compute-0 systemd[1]: libpod-83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f.scope: Deactivated successfully.
Nov 29 07:29:42 compute-0 podman[96346]: 2025-11-29 07:29:42.007468064 +0000 UTC m=+2.138427898 container died 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0129097dafa74f2ab0a87316d469345973fa34a7b037b63bb29a5be6e037416-merged.mount: Deactivated successfully.
Nov 29 07:29:42 compute-0 podman[96346]: 2025-11-29 07:29:42.056164294 +0000 UTC m=+2.187124108 container remove 83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f (image=quay.io/ceph/ceph:v18, name=sad_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:29:42 compute-0 systemd[1]: libpod-conmon-83f558ce25435d932541ac0f961e116926a36ceb18f7591f6bef80c61e1cd01f.scope: Deactivated successfully.
Nov 29 07:29:42 compute-0 sudo[96343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:42 compute-0 ceph-mon[75237]: pgmap v76: 5 pgs: 1 creating+peering, 1 unknown, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:42 compute-0 ceph-mon[75237]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:29:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/970856652' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:29:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/970856652' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:42 compute-0 ceph-mon[75237]: osdmap e26: 3 total, 3 up, 3 in
Nov 29 07:29:42 compute-0 sudo[96424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osceuxdzpismgdqvjmavokwyamxodetn ; /usr/bin/python3'
Nov 29 07:29:42 compute-0 sudo[96424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:42 compute-0 python3[96426]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:42 compute-0 podman[96427]: 2025-11-29 07:29:42.448986956 +0000 UTC m=+0.050158541 container create 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:29:42 compute-0 systemd[1]: Started libpod-conmon-488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2.scope.
Nov 29 07:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb76de7406d5692d5c054b9f8f4bbe7cd8a51057a46fddc99785b52439f209a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb76de7406d5692d5c054b9f8f4bbe7cd8a51057a46fddc99785b52439f209a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:42 compute-0 podman[96427]: 2025-11-29 07:29:42.42630516 +0000 UTC m=+0.027476795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:42 compute-0 podman[96427]: 2025-11-29 07:29:42.538310212 +0000 UTC m=+0.139481807 container init 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:29:42 compute-0 podman[96427]: 2025-11-29 07:29:42.545492194 +0000 UTC m=+0.146663779 container start 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:42 compute-0 podman[96427]: 2025-11-29 07:29:42.549520571 +0000 UTC m=+0.150692156 container attach 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:29:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v79: 37 pgs: 1 creating+peering, 32 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 07:29:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 29 07:29:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 29 07:29:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 27 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=9.914534569s) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 46.159664154s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 27 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=9.914534569s) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown pruub 46.159664154s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:42 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.375383377s) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 61.589588165s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:42 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=12.375383377s) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown pruub 61.589588165s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:42 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:29:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1480996138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:29:43 compute-0 ceph-mon[75237]: osdmap e27: 3 total, 3 up, 3 in
Nov 29 07:29:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1480996138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 26 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=15.446021080s) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active pruub 46.368198395s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26 pruub=15.446021080s) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown pruub 46.368198395s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.2( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.3( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.14( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.15( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.16( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.17( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.18( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.19( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.1f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.4( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.5( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.8( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.9( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.6( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.7( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.c( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.d( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.a( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.b( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.e( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.f( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.12( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.13( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.10( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 27 pg[2.11( empty local-lis/les=19/20 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 6 completed events
Nov 29 07:29:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:29:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 07:29:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1480996138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 29 07:29:43 compute-0 xenodochial_newton[96442]: pool 'cephfs.cephfs.data' created
Nov 29 07:29:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:43 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=21/22 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1d( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1c( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1f( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.8( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1e( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.7( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.b( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.6( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1b( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.a( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1a( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.4( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.9( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.3( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.2( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.c( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.d( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.e( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.f( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.10( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.19( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.11( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.12( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.14( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.15( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.13( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.16( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.17( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.18( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.5( empty local-lis/les=23/24 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.4( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.2( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.0( empty local-lis/les=27/28 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1d( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.19( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 28 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1e( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1c( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.a( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.9( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.8( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.3( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.5( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.6( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.4( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.2( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.7( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.0( empty local-lis/les=26/28 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.c( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.e( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.10( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.12( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.13( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.11( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.14( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.18( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.17( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.16( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.19( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.15( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1e( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.b( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.6( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1f( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 systemd[1]: libpod-488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2.scope: Deactivated successfully.
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.3( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.0( empty local-lis/les=27/28 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.c( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.19( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.15( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.16( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.17( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 28 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=23/23 les/c/f=24/24/0 sis=27) [0] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 28 pg[2.1a( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=19/19 les/c/f=20/20/0 sis=26) [2] r=0 lpr=26 pi=[19,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:44 compute-0 podman[96427]: 2025-11-29 07:29:44.030779995 +0000 UTC m=+1.631951610 container died 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb76de7406d5692d5c054b9f8f4bbe7cd8a51057a46fddc99785b52439f209a-merged.mount: Deactivated successfully.
Nov 29 07:29:44 compute-0 podman[96427]: 2025-11-29 07:29:44.110298759 +0000 UTC m=+1.711470364 container remove 488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2 (image=quay.io/ceph/ceph:v18, name=xenodochial_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:44 compute-0 systemd[1]: libpod-conmon-488f602bbccc2ba95dd541da6288a88af39fd126b237827b1948f1213a4588f2.scope: Deactivated successfully.
Nov 29 07:29:44 compute-0 sudo[96424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:44 compute-0 ceph-mon[75237]: pgmap v79: 37 pgs: 1 creating+peering, 32 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:29:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1480996138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:29:44 compute-0 ceph-mon[75237]: osdmap e28: 3 total, 3 up, 3 in
Nov 29 07:29:44 compute-0 sudo[96507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzortizgsxiocudfctpqojyrqdymuxv ; /usr/bin/python3'
Nov 29 07:29:44 compute-0 sudo[96507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:44 compute-0 python3[96509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v82: 100 pgs: 2 creating+peering, 63 unknown, 35 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:44 compute-0 podman[96510]: 2025-11-29 07:29:44.575043043 +0000 UTC m=+0.045287232 container create 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:29:44 compute-0 systemd[1]: Started libpod-conmon-5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3.scope.
Nov 29 07:29:44 compute-0 podman[96510]: 2025-11-29 07:29:44.554463603 +0000 UTC m=+0.024707812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c64dfb570d85fe194db0bc908bf7105bd0fb83d38d89b1111ebf0ae057a449/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c64dfb570d85fe194db0bc908bf7105bd0fb83d38d89b1111ebf0ae057a449/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:44 compute-0 podman[96510]: 2025-11-29 07:29:44.698068638 +0000 UTC m=+0.168312877 container init 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:29:44 compute-0 podman[96510]: 2025-11-29 07:29:44.705551968 +0000 UTC m=+0.175796157 container start 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:44 compute-0 podman[96510]: 2025-11-29 07:29:44.729747554 +0000 UTC m=+0.199991773 container attach 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 07:29:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 29 07:29:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 29 07:29:45 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 07:29:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2181785716' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 07:29:45 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 07:29:45 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 07:29:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 07:29:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2181785716' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 07:29:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 29 07:29:45 compute-0 sleepy_swartz[96525]: enabled application 'rbd' on pool 'vms'
Nov 29 07:29:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 29 07:29:46 compute-0 ceph-mon[75237]: pgmap v82: 100 pgs: 2 creating+peering, 63 unknown, 35 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:46 compute-0 ceph-mon[75237]: osdmap e29: 3 total, 3 up, 3 in
Nov 29 07:29:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2181785716' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 07:29:46 compute-0 systemd[1]: libpod-5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3.scope: Deactivated successfully.
Nov 29 07:29:46 compute-0 podman[96510]: 2025-11-29 07:29:46.024568568 +0000 UTC m=+1.494812797 container died 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c64dfb570d85fe194db0bc908bf7105bd0fb83d38d89b1111ebf0ae057a449-merged.mount: Deactivated successfully.
Nov 29 07:29:46 compute-0 podman[96510]: 2025-11-29 07:29:46.08268497 +0000 UTC m=+1.552929159 container remove 5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3 (image=quay.io/ceph/ceph:v18, name=sleepy_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:46 compute-0 systemd[1]: libpod-conmon-5d5f18c7e468851faa022def367e4a650c32e61c3cd016d1f1ca0ca0e27e3db3.scope: Deactivated successfully.
Nov 29 07:29:46 compute-0 sudo[96507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:46 compute-0 sudo[96587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjsxrxhaxwzoyfunpebothzpgssvmofi ; /usr/bin/python3'
Nov 29 07:29:46 compute-0 sudo[96587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:46 compute-0 python3[96589]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:46 compute-0 podman[96590]: 2025-11-29 07:29:46.515580073 +0000 UTC m=+0.054980170 container create 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v85: 100 pgs: 1 creating+peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:46 compute-0 systemd[1]: Started libpod-conmon-69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6.scope.
Nov 29 07:29:46 compute-0 podman[96590]: 2025-11-29 07:29:46.489683561 +0000 UTC m=+0.029083748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a59840dac6d8ec572c74c519f9e18d446c5ecdd04bf127024bf15d07c11689b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a59840dac6d8ec572c74c519f9e18d446c5ecdd04bf127024bf15d07c11689b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:46 compute-0 podman[96590]: 2025-11-29 07:29:46.623890376 +0000 UTC m=+0.163290513 container init 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:46 compute-0 podman[96590]: 2025-11-29 07:29:46.631863919 +0000 UTC m=+0.171264026 container start 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:29:46 compute-0 podman[96590]: 2025-11-29 07:29:46.63602542 +0000 UTC m=+0.175425527 container attach 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:29:46 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 07:29:46 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 07:29:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Nov 29 07:29:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Nov 29 07:29:47 compute-0 ceph-mon[75237]: 4.1 scrub starts
Nov 29 07:29:47 compute-0 ceph-mon[75237]: 4.1 scrub ok
Nov 29 07:29:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2181785716' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 07:29:47 compute-0 ceph-mon[75237]: osdmap e30: 3 total, 3 up, 3 in
Nov 29 07:29:47 compute-0 ceph-mon[75237]: 3.1 scrub starts
Nov 29 07:29:47 compute-0 ceph-mon[75237]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 07:29:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613465980' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 07:29:47 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 07:29:47 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 07:29:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 07:29:48 compute-0 ceph-mon[75237]: pgmap v85: 100 pgs: 1 creating+peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:48 compute-0 ceph-mon[75237]: 3.1 scrub ok
Nov 29 07:29:48 compute-0 ceph-mon[75237]: 4.2 deep-scrub starts
Nov 29 07:29:48 compute-0 ceph-mon[75237]: 4.2 deep-scrub ok
Nov 29 07:29:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1613465980' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 07:29:48 compute-0 ceph-mon[75237]: 3.2 scrub starts
Nov 29 07:29:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613465980' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 07:29:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 29 07:29:48 compute-0 silly_lehmann[96605]: enabled application 'rbd' on pool 'volumes'
Nov 29 07:29:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 29 07:29:48 compute-0 systemd[1]: libpod-69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6.scope: Deactivated successfully.
Nov 29 07:29:48 compute-0 podman[96590]: 2025-11-29 07:29:48.073074083 +0000 UTC m=+1.612474180 container died 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a59840dac6d8ec572c74c519f9e18d446c5ecdd04bf127024bf15d07c11689b-merged.mount: Deactivated successfully.
Nov 29 07:29:48 compute-0 systemd[76685]: Starting Mark boot as successful...
Nov 29 07:29:48 compute-0 systemd[76685]: Finished Mark boot as successful.
Nov 29 07:29:48 compute-0 podman[96590]: 2025-11-29 07:29:48.128932815 +0000 UTC m=+1.668332922 container remove 69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6 (image=quay.io/ceph/ceph:v18, name=silly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:48 compute-0 systemd[1]: libpod-conmon-69982b600b5d8afbce6a69616bb5181fcb2a8401d5e61869b5697065636c12a6.scope: Deactivated successfully.
Nov 29 07:29:48 compute-0 sudo[96587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:48 compute-0 sudo[96666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxyjxiihjdhyzfuwrxhssbtcbghiqch ; /usr/bin/python3'
Nov 29 07:29:48 compute-0 sudo[96666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:48 compute-0 python3[96668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:48 compute-0 podman[96669]: 2025-11-29 07:29:48.555526009 +0000 UTC m=+0.054811956 container create 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v87: 100 pgs: 1 creating+peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:48 compute-0 systemd[1]: Started libpod-conmon-1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75.scope.
Nov 29 07:29:48 compute-0 podman[96669]: 2025-11-29 07:29:48.523544965 +0000 UTC m=+0.022830932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707fb911dadbcee819cc7ea7a0d4189c1eef8e520623306212eb48e2b58c18c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707fb911dadbcee819cc7ea7a0d4189c1eef8e520623306212eb48e2b58c18c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:48 compute-0 podman[96669]: 2025-11-29 07:29:48.644393522 +0000 UTC m=+0.143679479 container init 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:48 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 07:29:48 compute-0 podman[96669]: 2025-11-29 07:29:48.652644763 +0000 UTC m=+0.151930720 container start 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:48 compute-0 podman[96669]: 2025-11-29 07:29:48.65630139 +0000 UTC m=+0.155587327 container attach 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:29:48 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 29 07:29:49 compute-0 ceph-mon[75237]: 3.2 scrub ok
Nov 29 07:29:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1613465980' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 07:29:49 compute-0 ceph-mon[75237]: osdmap e31: 3 total, 3 up, 3 in
Nov 29 07:29:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2230904722' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 07:29:49 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 07:29:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 29 07:29:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 29 07:29:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 07:29:50 compute-0 ceph-mon[75237]: pgmap v87: 100 pgs: 1 creating+peering, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:50 compute-0 ceph-mon[75237]: 2.1 scrub starts
Nov 29 07:29:50 compute-0 ceph-mon[75237]: 2.1 scrub ok
Nov 29 07:29:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2230904722' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 07:29:50 compute-0 ceph-mon[75237]: 3.3 deep-scrub starts
Nov 29 07:29:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2230904722' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 07:29:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 29 07:29:50 compute-0 agitated_yonath[96684]: enabled application 'rbd' on pool 'backups'
Nov 29 07:29:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 29 07:29:50 compute-0 systemd[1]: libpod-1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75.scope: Deactivated successfully.
Nov 29 07:29:50 compute-0 podman[96669]: 2025-11-29 07:29:50.122154122 +0000 UTC m=+1.621440109 container died 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0707fb911dadbcee819cc7ea7a0d4189c1eef8e520623306212eb48e2b58c18c-merged.mount: Deactivated successfully.
Nov 29 07:29:50 compute-0 podman[96669]: 2025-11-29 07:29:50.173969106 +0000 UTC m=+1.673255053 container remove 1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75 (image=quay.io/ceph/ceph:v18, name=agitated_yonath, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:50 compute-0 systemd[1]: libpod-conmon-1707a15ef5732c27cb854d85040a4fa32937c66f0e0ccdbb3150ace847b9ab75.scope: Deactivated successfully.
Nov 29 07:29:50 compute-0 sudo[96666]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:50 compute-0 sudo[96746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddlqjwmnqdnfyhnpcimpvhokjslyupu ; /usr/bin/python3'
Nov 29 07:29:50 compute-0 sudo[96746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:50 compute-0 python3[96748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:50 compute-0 podman[96749]: 2025-11-29 07:29:50.491047536 +0000 UTC m=+0.038234552 container create 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 07:29:50 compute-0 systemd[1]: Started libpod-conmon-1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9.scope.
Nov 29 07:29:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9eba9adb9dc0b38fd9cd44f1bec61f3032bf04c496b2bb650a980cc24d4941e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9eba9adb9dc0b38fd9cd44f1bec61f3032bf04c496b2bb650a980cc24d4941e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:29:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:50 compute-0 podman[96749]: 2025-11-29 07:29:50.476510968 +0000 UTC m=+0.023698004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:50 compute-0 podman[96749]: 2025-11-29 07:29:50.573970021 +0000 UTC m=+0.121157057 container init 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:29:50 compute-0 podman[96749]: 2025-11-29 07:29:50.584386279 +0000 UTC m=+0.131573325 container start 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:50 compute-0 podman[96749]: 2025-11-29 07:29:50.589030883 +0000 UTC m=+0.136217919 container attach 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:29:50 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 07:29:50 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 07:29:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906474113s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.496658325s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906311035s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.496597290s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906338692s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.496589661s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906332970s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.496658325s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906229973s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.496597290s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906214714s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.496589661s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906467438s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497051239s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906433105s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497051239s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906247139s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.496894836s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906384468s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497058868s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906207085s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.496894836s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906342506s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497058868s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906900406s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497657776s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906880379s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497657776s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906705856s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497665405s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906682014s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497665405s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906669617s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497688293s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906731606s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497753143s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906631470s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497688293s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906692505s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497753143s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906561852s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497684479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906542778s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497684479s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906470299s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497814178s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906320572s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497932434s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906148911s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497772217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906296730s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497932434s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906258583s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498027802s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906038284s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497772217s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906238556s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498027802s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906143188s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497989655s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906098366s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497989655s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906172752s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498134613s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906153679s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498134613s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906019211s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498023987s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905980110s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498023987s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905952454s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498081207s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905935287s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498081207s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905900002s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498085022s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905817986s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498085022s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905790329s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498138428s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905796051s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.498153687s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905769348s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498153687s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905742645s) [0] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.498138428s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906375885s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active pruub 47.497917175s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.905415535s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497917175s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=26/28 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33 pruub=8.906444550s) [1] r=-1 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 47.497814178s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-mon[75237]: 2.2 scrub starts
Nov 29 07:29:51 compute-0 ceph-mon[75237]: 2.2 scrub ok
Nov 29 07:29:51 compute-0 ceph-mon[75237]: 3.3 deep-scrub ok
Nov 29 07:29:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2230904722' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 07:29:51 compute-0 ceph-mon[75237]: osdmap e32: 3 total, 3 up, 3 in
Nov 29 07:29:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.11( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.13( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.17( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.3( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.5( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.16( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.4( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.7( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.6( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.9( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882472992s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271232605s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882447243s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271232605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882300377s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271190643s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1d( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882287979s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271190643s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882656097s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271636963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882642746s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271636963s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881850243s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.270862579s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882586479s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271675110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882562637s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271648407s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.7( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882570267s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271675110s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.9( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882510185s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271648407s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882513046s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271671295s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882516861s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271713257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882492065s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271747589s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.8( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882467270s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271671295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882415771s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271747589s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.6( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882448196s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271713257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882379532s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271785736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882349014s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271778107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882308006s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271778107s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882287025s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271804810s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.8( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882264137s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271804810s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882311821s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271877289s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882307053s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271785736s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882295609s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271877289s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882285118s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271884918s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882238388s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271884918s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882293701s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.272003174s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.11( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882277489s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.272003174s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882220268s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.271999359s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1e( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.880908012s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.270862579s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882079124s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.272060394s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.15( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882061958s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.272060394s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.12( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881950378s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.271999359s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.882006645s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.272117615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881973267s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.272075653s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.17( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881965637s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.272117615s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881919861s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.272075653s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.8( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.886142731s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.276496887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.1b( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.886103630s) [0] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.276496887s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881595612s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 53.272125244s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.7( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[3.18( empty local-lis/les=27/28 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.881567001s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 53.272125244s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.9( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.4( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.2( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.f( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.1f( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.2( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[2.19( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896784782s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.250961304s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896753311s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.250961304s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.889733315s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.244056702s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.889714241s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.244056702s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896154404s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.250602722s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.7( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896135330s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.250602722s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896311760s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.250923157s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896292686s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.250923157s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895940781s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.250679016s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895921707s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.250679016s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.897169113s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.252029419s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.897151947s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.252029419s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896361351s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251327515s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896343231s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251327515s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896282196s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251365662s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896264076s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251365662s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896518707s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251731873s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.4( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896502495s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251731873s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896169662s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251518250s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.896150589s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251518250s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895541191s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251495361s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895515442s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251495361s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895389557s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251571655s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895298004s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251571655s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895337105s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251571655s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895292282s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251609802s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.f( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895270348s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251609802s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895244598s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251571655s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895119667s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251670837s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895100594s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251663208s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.11( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895096779s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251670837s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.10( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.895054817s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251663208s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892441750s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251762390s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892390251s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251739502s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.12( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892340660s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251739502s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892477989s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251930237s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892432213s) [1] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251930237s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.892158508s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251762390s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.891510963s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active pruub 66.251976013s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=27/28 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33 pruub=8.891471863s) [2] r=-1 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.251976013s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.1c( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.1b( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.a( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.1a( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.1( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.e( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.11( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.13( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[4.18( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.7( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.5( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.8( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.16( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.1e( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 33 pg[3.18( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.10( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.12( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 33 pg[4.14( empty local-lis/les=0/0 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:29:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2372230939' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 07:29:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 07:29:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 07:29:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:51 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 29 07:29:51 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 29 07:29:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 07:29:52 compute-0 ceph-mon[75237]: pgmap v89: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:52 compute-0 ceph-mon[75237]: 4.3 scrub starts
Nov 29 07:29:52 compute-0 ceph-mon[75237]: 4.3 scrub ok
Nov 29 07:29:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:29:52 compute-0 ceph-mon[75237]: osdmap e33: 3 total, 3 up, 3 in
Nov 29 07:29:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2372230939' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 07:29:52 compute-0 ceph-mon[75237]: 3.4 scrub starts
Nov 29 07:29:52 compute-0 ceph-mon[75237]: 3.4 scrub ok
Nov 29 07:29:52 compute-0 ceph-mon[75237]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2372230939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 07:29:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 29 07:29:52 compute-0 lucid_easley[96764]: enabled application 'rbd' on pool 'images'
Nov 29 07:29:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.1d( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 systemd[1]: libpod-1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9.scope: Deactivated successfully.
Nov 29 07:29:52 compute-0 podman[96749]: 2025-11-29 07:29:52.164626336 +0000 UTC m=+1.711813352 container died 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.8( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.e( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[3.11( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [2] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=27/23 lis/c=27/27 les/c/f=28/28/0 sis=33) [1] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=26/19 lis/c=26/26 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[26,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:29:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9eba9adb9dc0b38fd9cd44f1bec61f3032bf04c496b2bb650a980cc24d4941e-merged.mount: Deactivated successfully.
Nov 29 07:29:52 compute-0 podman[96749]: 2025-11-29 07:29:52.235731055 +0000 UTC m=+1.782918071 container remove 1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9 (image=quay.io/ceph/ceph:v18, name=lucid_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:52 compute-0 systemd[1]: libpod-conmon-1111bd4bb309bc0936c9f2f039a93908f7970c148dec61f512a7a2fb24f52ca9.scope: Deactivated successfully.
Nov 29 07:29:52 compute-0 sudo[96746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:52 compute-0 sudo[96823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaadhtonzfsddvsglpjrmnuqqlxzjxct ; /usr/bin/python3'
Nov 29 07:29:52 compute-0 sudo[96823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:52 compute-0 python3[96825]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:52 compute-0 sshd-session[96609]: Received disconnect from 101.47.142.104 port 48886:11: Bye Bye [preauth]
Nov 29 07:29:52 compute-0 sshd-session[96609]: Disconnected from authenticating user root 101.47.142.104 port 48886 [preauth]
Nov 29 07:29:52 compute-0 podman[96826]: 2025-11-29 07:29:52.592820473 +0000 UTC m=+0.048410284 container create 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:52 compute-0 systemd[1]: Started libpod-conmon-3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7.scope.
Nov 29 07:29:52 compute-0 podman[96826]: 2025-11-29 07:29:52.56872726 +0000 UTC m=+0.024317111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10c4784fb4443283945796d716aeb016866e379dda9a787c9db295bfae197c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10c4784fb4443283945796d716aeb016866e379dda9a787c9db295bfae197c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 podman[96826]: 2025-11-29 07:29:52.678879191 +0000 UTC m=+0.134469002 container init 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:52 compute-0 podman[96826]: 2025-11-29 07:29:52.685419766 +0000 UTC m=+0.141009557 container start 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:29:52 compute-0 podman[96826]: 2025-11-29 07:29:52.6889357 +0000 UTC m=+0.144525521 container attach 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:52 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 07:29:52 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 07:29:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 07:29:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239167236' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 07:29:53 compute-0 ceph-mon[75237]: 4.6 deep-scrub starts
Nov 29 07:29:53 compute-0 ceph-mon[75237]: 4.6 deep-scrub ok
Nov 29 07:29:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2372230939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 07:29:53 compute-0 ceph-mon[75237]: osdmap e34: 3 total, 3 up, 3 in
Nov 29 07:29:53 compute-0 ceph-mon[75237]: 3.b scrub starts
Nov 29 07:29:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 07:29:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3239167236' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 07:29:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 29 07:29:53 compute-0 frosty_mclaren[96841]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 07:29:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 29 07:29:53 compute-0 systemd[1]: libpod-3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7.scope: Deactivated successfully.
Nov 29 07:29:53 compute-0 podman[96826]: 2025-11-29 07:29:53.283555882 +0000 UTC m=+0.739145683 container died 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a10c4784fb4443283945796d716aeb016866e379dda9a787c9db295bfae197c5-merged.mount: Deactivated successfully.
Nov 29 07:29:53 compute-0 podman[96826]: 2025-11-29 07:29:53.336069165 +0000 UTC m=+0.791658976 container remove 3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7 (image=quay.io/ceph/ceph:v18, name=frosty_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:53 compute-0 systemd[1]: libpod-conmon-3f31096447439b60ab5415d0101648f51b3a0d885e08e81125e8c86723e7e0b7.scope: Deactivated successfully.
Nov 29 07:29:53 compute-0 sudo[96823]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:53 compute-0 sudo[96902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemxmlvxnnjdagsunybefhiriutgmdiq ; /usr/bin/python3'
Nov 29 07:29:53 compute-0 sudo[96902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:53 compute-0 python3[96904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:53 compute-0 podman[96905]: 2025-11-29 07:29:53.762065223 +0000 UTC m=+0.077497721 container create 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:53 compute-0 systemd[1]: Started libpod-conmon-5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266.scope.
Nov 29 07:29:53 compute-0 podman[96905]: 2025-11-29 07:29:53.729587946 +0000 UTC m=+0.045020494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e86756f3f8086f3c49df437450041ce34e7a081fb9bdf6b5ec87e8c3f73a30f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e86756f3f8086f3c49df437450041ce34e7a081fb9bdf6b5ec87e8c3f73a30f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:53 compute-0 podman[96905]: 2025-11-29 07:29:53.852222831 +0000 UTC m=+0.167655309 container init 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:29:53 compute-0 podman[96905]: 2025-11-29 07:29:53.858368505 +0000 UTC m=+0.173800973 container start 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:53 compute-0 podman[96905]: 2025-11-29 07:29:53.862818074 +0000 UTC m=+0.178250562 container attach 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:29:54 compute-0 ceph-mon[75237]: pgmap v92: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:54 compute-0 ceph-mon[75237]: 3.b scrub ok
Nov 29 07:29:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3239167236' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 07:29:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3239167236' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 07:29:54 compute-0 ceph-mon[75237]: osdmap e35: 3 total, 3 up, 3 in
Nov 29 07:29:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 07:29:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2753434047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 07:29:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 29 07:29:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 29 07:29:54 compute-0 sshd-session[96864]: Received disconnect from 103.236.140.19 port 36886:11: Bye Bye [preauth]
Nov 29 07:29:54 compute-0 sshd-session[96864]: Disconnected from authenticating user root 103.236.140.19 port 36886 [preauth]
Nov 29 07:29:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 07:29:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2753434047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 07:29:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2753434047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 07:29:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 29 07:29:55 compute-0 eloquent_cori[96920]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 07:29:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 29 07:29:55 compute-0 systemd[1]: libpod-5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266.scope: Deactivated successfully.
Nov 29 07:29:55 compute-0 podman[96905]: 2025-11-29 07:29:55.266340251 +0000 UTC m=+1.581772699 container died 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e86756f3f8086f3c49df437450041ce34e7a081fb9bdf6b5ec87e8c3f73a30f-merged.mount: Deactivated successfully.
Nov 29 07:29:55 compute-0 podman[96905]: 2025-11-29 07:29:55.319137272 +0000 UTC m=+1.634569730 container remove 5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266 (image=quay.io/ceph/ceph:v18, name=eloquent_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:29:55 compute-0 systemd[1]: libpod-conmon-5722726234e4ff887fa316a7557e8e6ec9026b9d295f0ce40aca8ca16c311266.scope: Deactivated successfully.
Nov 29 07:29:55 compute-0 sudo[96902]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:55 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Nov 29 07:29:55 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Nov 29 07:29:56 compute-0 ceph-mon[75237]: pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:56 compute-0 ceph-mon[75237]: 2.c scrub starts
Nov 29 07:29:56 compute-0 ceph-mon[75237]: 2.c scrub ok
Nov 29 07:29:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2753434047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 07:29:56 compute-0 ceph-mon[75237]: osdmap e36: 3 total, 3 up, 3 in
Nov 29 07:29:56 compute-0 python3[97030]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:29:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:56 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 07:29:56 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 07:29:56 compute-0 python3[97101]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401395.99588-36912-151820817285394/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:56 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 29 07:29:56 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 07:29:56 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 29 07:29:56 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 07:29:57 compute-0 sudo[97201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avpkagqdolrmbxnmvbbtznmxoahthiib ; /usr/bin/python3'
Nov 29 07:29:57 compute-0 sudo[97201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:29:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:29:57 compute-0 ceph-mon[75237]: 4.b deep-scrub starts
Nov 29 07:29:57 compute-0 ceph-mon[75237]: 4.b deep-scrub ok
Nov 29 07:29:57 compute-0 ceph-mon[75237]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:29:57 compute-0 python3[97203]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:29:57 compute-0 sudo[97201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:57 compute-0 sudo[97276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqpiceuhugsnqunxbdpezlypazltebz ; /usr/bin/python3'
Nov 29 07:29:57 compute-0 sudo[97276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:57 compute-0 python3[97278]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401397.02569-36926-43379350933184/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f3617a273b9324871fcfcdbc3b57560807749112 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:29:57 compute-0 sudo[97276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 07:29:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 07:29:58 compute-0 sudo[97326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdcphsnlzkbpjuqeursvtsnqzxtddncq ; /usr/bin/python3'
Nov 29 07:29:58 compute-0 sudo[97326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:58 compute-0 python3[97328]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:58 compute-0 ceph-mon[75237]: pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 2.e scrub starts
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 2.e scrub ok
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 4.c scrub starts
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 3.d scrub starts
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 4.c scrub ok
Nov 29 07:29:58 compute-0 ceph-mon[75237]: 3.d scrub ok
Nov 29 07:29:58 compute-0 ceph-mon[75237]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:29:58 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:29:58 compute-0 podman[97329]: 2025-11-29 07:29:58.279843291 +0000 UTC m=+0.072576770 container create 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 07:29:58 compute-0 systemd[1]: Started libpod-conmon-6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03.scope.
Nov 29 07:29:58 compute-0 podman[97329]: 2025-11-29 07:29:58.246767507 +0000 UTC m=+0.039501086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecfbcf2d3b0e58cc3cb6a939a7c3d1e1dd036fc4da5e96459856c2d72f45174/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecfbcf2d3b0e58cc3cb6a939a7c3d1e1dd036fc4da5e96459856c2d72f45174/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecfbcf2d3b0e58cc3cb6a939a7c3d1e1dd036fc4da5e96459856c2d72f45174/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:58 compute-0 podman[97329]: 2025-11-29 07:29:58.59953239 +0000 UTC m=+0.392265959 container init 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:29:58 compute-0 podman[97329]: 2025-11-29 07:29:58.611040517 +0000 UTC m=+0.403774026 container start 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:29:58 compute-0 podman[97329]: 2025-11-29 07:29:58.624047625 +0000 UTC m=+0.416781114 container attach 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:29:58 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 29 07:29:58 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 29 07:29:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:29:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3986314926' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:29:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3986314926' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:29:59 compute-0 focused_raman[97344]: 
Nov 29 07:29:59 compute-0 focused_raman[97344]: [global]
Nov 29 07:29:59 compute-0 focused_raman[97344]:         fsid = 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:29:59 compute-0 focused_raman[97344]:         mon_host = 192.168.122.100
Nov 29 07:29:59 compute-0 systemd[1]: libpod-6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03.scope: Deactivated successfully.
Nov 29 07:29:59 compute-0 podman[97329]: 2025-11-29 07:29:59.205609257 +0000 UTC m=+0.998342776 container died 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fecfbcf2d3b0e58cc3cb6a939a7c3d1e1dd036fc4da5e96459856c2d72f45174-merged.mount: Deactivated successfully.
Nov 29 07:29:59 compute-0 podman[97329]: 2025-11-29 07:29:59.254119914 +0000 UTC m=+1.046853393 container remove 6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03 (image=quay.io/ceph/ceph:v18, name=focused_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:59 compute-0 sudo[97369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:59 compute-0 systemd[1]: libpod-conmon-6e7da37e27561bcad7b9f3034ab79f723f6d4085622791bcfdab73bbc7257f03.scope: Deactivated successfully.
Nov 29 07:29:59 compute-0 sudo[97369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:59 compute-0 sudo[97369]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:59 compute-0 ceph-mon[75237]: 3.10 scrub starts
Nov 29 07:29:59 compute-0 ceph-mon[75237]: 3.10 scrub ok
Nov 29 07:29:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3986314926' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:29:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3986314926' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:29:59 compute-0 sudo[97326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:59 compute-0 sudo[97408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:59 compute-0 sudo[97408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:59 compute-0 sudo[97408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:59 compute-0 sudo[97433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:59 compute-0 sudo[97433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:59 compute-0 sudo[97433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:59 compute-0 sudo[97479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuqpgjrkxlwhuqqnzggqfhdubdgfnyv ; /usr/bin/python3'
Nov 29 07:29:59 compute-0 sudo[97479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:29:59 compute-0 sudo[97484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:29:59 compute-0 sudo[97484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:59 compute-0 python3[97483]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:29:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 07:29:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 07:29:59 compute-0 podman[97509]: 2025-11-29 07:29:59.643269738 +0000 UTC m=+0.056121041 container create c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:59 compute-0 systemd[1]: Started libpod-conmon-c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335.scope.
Nov 29 07:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e49e300411ada12fc90d637168135b49f23cb8b7f2e384be88145e9c16d495a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e49e300411ada12fc90d637168135b49f23cb8b7f2e384be88145e9c16d495a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e49e300411ada12fc90d637168135b49f23cb8b7f2e384be88145e9c16d495a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:59 compute-0 podman[97509]: 2025-11-29 07:29:59.625737729 +0000 UTC m=+0.038589052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:29:59 compute-0 podman[97509]: 2025-11-29 07:29:59.731167855 +0000 UTC m=+0.144019198 container init c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:29:59 compute-0 podman[97509]: 2025-11-29 07:29:59.742524228 +0000 UTC m=+0.155375541 container start c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:29:59 compute-0 podman[97509]: 2025-11-29 07:29:59.747426419 +0000 UTC m=+0.160277732 container attach c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 07:29:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 07:30:00 compute-0 sshd-session[97591]: Invalid user ftpuser from 20.185.243.158 port 33958
Nov 29 07:30:00 compute-0 podman[97600]: 2025-11-29 07:30:00.194144341 +0000 UTC m=+0.172677613 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:30:00 compute-0 sshd-session[97591]: Received disconnect from 20.185.243.158 port 33958:11: Bye Bye [preauth]
Nov 29 07:30:00 compute-0 sshd-session[97591]: Disconnected from invalid user ftpuser 20.185.243.158 port 33958 [preauth]
Nov 29 07:30:00 compute-0 ceph-mon[75237]: pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:00 compute-0 ceph-mon[75237]: 4.15 scrub starts
Nov 29 07:30:00 compute-0 ceph-mon[75237]: 4.15 scrub ok
Nov 29 07:30:00 compute-0 podman[97600]: 2025-11-29 07:30:00.330606455 +0000 UTC m=+0.309139677 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2304091632' entity='client.admin' 
Nov 29 07:30:00 compute-0 agitated_borg[97546]: set ssl_option
Nov 29 07:30:00 compute-0 systemd[1]: libpod-c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335.scope: Deactivated successfully.
Nov 29 07:30:00 compute-0 podman[97509]: 2025-11-29 07:30:00.461768529 +0000 UTC m=+0.874619862 container died c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e49e300411ada12fc90d637168135b49f23cb8b7f2e384be88145e9c16d495a-merged.mount: Deactivated successfully.
Nov 29 07:30:00 compute-0 podman[97509]: 2025-11-29 07:30:00.518164655 +0000 UTC m=+0.931015978 container remove c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335 (image=quay.io/ceph/ceph:v18, name=agitated_borg, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:00 compute-0 systemd[1]: libpod-conmon-c33ffaba92deb7125e969d62576ecfa96849774be9f97636e980108148371335.scope: Deactivated successfully.
Nov 29 07:30:00 compute-0 sudo[97479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v98: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:00 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 29 07:30:00 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 29 07:30:00 compute-0 sudo[97751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktfljxsheymzhgityrrkmwmvbbdkymem ; /usr/bin/python3'
Nov 29 07:30:00 compute-0 sudo[97751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:00 compute-0 sudo[97484]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:30:00 compute-0 python3[97759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9257be2d-447c-4b4d-89ad-931adfc92e73 does not exist
Nov 29 07:30:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 30ef91a6-c343-42aa-939b-8d2c8ddf2b99 does not exist
Nov 29 07:30:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev eb2401d6-7c02-4087-b1cb-6694cb54afda does not exist
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:00 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 07:30:00 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 07:30:00 compute-0 podman[97776]: 2025-11-29 07:30:00.889199135 +0000 UTC m=+0.039063735 container create 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 07:30:00 compute-0 sudo[97782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:00 compute-0 sudo[97782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:00 compute-0 sudo[97782]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:00 compute-0 systemd[1]: Started libpod-conmon-75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc.scope.
Nov 29 07:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550b6eb868f72972659fcf946d09f5c2d4f069eb2939b114bd2da393f7f98513/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550b6eb868f72972659fcf946d09f5c2d4f069eb2939b114bd2da393f7f98513/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550b6eb868f72972659fcf946d09f5c2d4f069eb2939b114bd2da393f7f98513/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:00 compute-0 podman[97776]: 2025-11-29 07:30:00.872350115 +0000 UTC m=+0.022214715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:00 compute-0 sudo[97817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:00 compute-0 sudo[97817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:00 compute-0 podman[97776]: 2025-11-29 07:30:00.978779038 +0000 UTC m=+0.128643698 container init 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:30:00 compute-0 sudo[97817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:00 compute-0 podman[97776]: 2025-11-29 07:30:00.989376381 +0000 UTC m=+0.139240981 container start 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:00 compute-0 podman[97776]: 2025-11-29 07:30:00.99273006 +0000 UTC m=+0.142594710 container attach 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:01 compute-0 sudo[97846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:01 compute-0 sudo[97846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:01 compute-0 sudo[97846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:01 compute-0 sudo[97871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:30:01 compute-0 sudo[97871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:01 compute-0 ceph-mon[75237]: 2.10 scrub starts
Nov 29 07:30:01 compute-0 ceph-mon[75237]: 2.10 scrub ok
Nov 29 07:30:01 compute-0 ceph-mon[75237]: 4.16 scrub starts
Nov 29 07:30:01 compute-0 ceph-mon[75237]: 4.16 scrub ok
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2304091632' entity='client.admin' 
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.469821054 +0000 UTC m=+0.059695756 container create 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:01 compute-0 systemd[1]: Started libpod-conmon-6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a.scope.
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.441910378 +0000 UTC m=+0.031785140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.575500407 +0000 UTC m=+0.165375109 container init 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:01 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.583017657 +0000 UTC m=+0.172892389 container start 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:01 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:01 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:30:01 compute-0 peaceful_northcutt[97967]: 167 167
Nov 29 07:30:01 compute-0 systemd[1]: libpod-6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a.scope: Deactivated successfully.
Nov 29 07:30:01 compute-0 conmon[97967]: conmon 6753330ae153646af5b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a.scope/container/memory.events
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.58947339 +0000 UTC m=+0.179348092 container attach 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.591842833 +0000 UTC m=+0.181717535 container died 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:30:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:01 compute-0 quirky_faraday[97820]: Scheduled rgw.rgw update...
Nov 29 07:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4d2a40fa96f1f5f599455bd77194758a1702f31ac9eef3b04fbb0945dec42a-merged.mount: Deactivated successfully.
Nov 29 07:30:01 compute-0 systemd[1]: libpod-75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc.scope: Deactivated successfully.
Nov 29 07:30:01 compute-0 podman[97776]: 2025-11-29 07:30:01.621910406 +0000 UTC m=+0.771775006 container died 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:30:01 compute-0 podman[97951]: 2025-11-29 07:30:01.636139906 +0000 UTC m=+0.226014608 container remove 6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_northcutt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:30:01 compute-0 systemd[1]: libpod-conmon-6753330ae153646af5b3f4fe0c9e9b1573f4078efbfbf0332cb2206103e12d8a.scope: Deactivated successfully.
Nov 29 07:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-550b6eb868f72972659fcf946d09f5c2d4f069eb2939b114bd2da393f7f98513-merged.mount: Deactivated successfully.
Nov 29 07:30:01 compute-0 podman[97776]: 2025-11-29 07:30:01.68085269 +0000 UTC m=+0.830717290 container remove 75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc (image=quay.io/ceph/ceph:v18, name=quirky_faraday, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:30:01 compute-0 systemd[1]: libpod-conmon-75b2c2b5eed51cb994d325213880a4c0b64a161cca6fe54478f96d818dcb2abc.scope: Deactivated successfully.
Nov 29 07:30:01 compute-0 sudo[97751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:01 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 07:30:01 compute-0 podman[98005]: 2025-11-29 07:30:01.811590902 +0000 UTC m=+0.039260800 container create d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:30:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:01 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 07:30:01 compute-0 systemd[1]: Started libpod-conmon-d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2.scope.
Nov 29 07:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:01 compute-0 podman[98005]: 2025-11-29 07:30:01.79390214 +0000 UTC m=+0.021572058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:01 compute-0 podman[98005]: 2025-11-29 07:30:01.916840033 +0000 UTC m=+0.144509991 container init d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:30:01 compute-0 podman[98005]: 2025-11-29 07:30:01.929011048 +0000 UTC m=+0.156680956 container start d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:01 compute-0 podman[98005]: 2025-11-29 07:30:01.943412483 +0000 UTC m=+0.171082431 container attach d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:30:02 compute-0 ceph-mon[75237]: pgmap v98: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:02 compute-0 ceph-mon[75237]: 2.12 scrub starts
Nov 29 07:30:02 compute-0 ceph-mon[75237]: 2.12 scrub ok
Nov 29 07:30:02 compute-0 ceph-mon[75237]: 3.13 scrub starts
Nov 29 07:30:02 compute-0 ceph-mon[75237]: 3.13 scrub ok
Nov 29 07:30:02 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v99: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 29 07:30:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 29 07:30:02 compute-0 python3[98102]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:30:03 compute-0 trusting_mirzakhani[98022]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:30:03 compute-0 trusting_mirzakhani[98022]: --> relative data size: 1.0
Nov 29 07:30:03 compute-0 trusting_mirzakhani[98022]: --> All data devices are unavailable
Nov 29 07:30:03 compute-0 systemd[1]: libpod-d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2.scope: Deactivated successfully.
Nov 29 07:30:03 compute-0 podman[98005]: 2025-11-29 07:30:03.088030346 +0000 UTC m=+1.315700244 container died d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:30:03 compute-0 systemd[1]: libpod-d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2.scope: Consumed 1.083s CPU time.
Nov 29 07:30:03 compute-0 python3[98193]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401402.4452593-36969-81924256073560/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb70ba713a77ff6f32b47d81509d0428c6604ad7e787c958f672a2bc629106c4-merged.mount: Deactivated successfully.
Nov 29 07:30:03 compute-0 podman[98005]: 2025-11-29 07:30:03.155381984 +0000 UTC m=+1.383051882 container remove d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:03 compute-0 systemd[1]: libpod-conmon-d8cd91ce11b79a868efd40d62c13e848f5658be32db076a4ef73bb3e553dcec2.scope: Deactivated successfully.
Nov 29 07:30:03 compute-0 sudo[97871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-0 sudo[98233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:03 compute-0 sudo[98233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-0 sudo[98233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-0 sudo[98258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:03 compute-0 sudo[98258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-0 sudo[98258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-0 sudo[98327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgaibxiuivytatrrshldjwgtqfmfjid ; /usr/bin/python3'
Nov 29 07:30:03 compute-0 sudo[98327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:03 compute-0 sudo[98286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:03 compute-0 sudo[98286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-0 sudo[98286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:03 compute-0 sudo[98334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:30:03 compute-0 sudo[98334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:03 compute-0 ceph-mon[75237]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:03 compute-0 ceph-mon[75237]: Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:03 compute-0 ceph-mon[75237]: 3.14 scrub starts
Nov 29 07:30:03 compute-0 ceph-mon[75237]: 3.14 scrub ok
Nov 29 07:30:03 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Nov 29 07:30:03 compute-0 python3[98332]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
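
The Ansible task above drives the deployment through a one-shot "podman run" of the ceph CLI container. A minimal Python sketch of the same call pattern follows; the image tag, fsid, and keyring paths are copied from the log line, while the run_ceph helper and everything else is illustrative, not the playbook's actual code:

    import subprocess

    # Hypothetical helper mirroring the one-shot "podman run --entrypoint ceph"
    # pattern in the log. Image, fsid and keyring paths are taken from the log
    # line above; the helper itself is an assumption for illustration.
    def run_ceph(*args: str) -> str:
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
            "--fsid", "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(
            cmd, check=True, capture_output=True, text=True).stdout

    # Equivalent of the task above: create the CephFS volume on compute-0.
    print(run_ceph("fs", "volume", "create", "cephfs", "--placement=compute-0"))
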
Nov 29 07:30:03 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Nov 29 07:30:03 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 07:30:03 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 07:30:03 compute-0 podman[98386]: 2025-11-29 07:30:03.862296716 +0000 UTC m=+0.049561635 container create 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:03 compute-0 systemd[1]: Started libpod-conmon-5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d.scope.
Nov 29 07:30:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d5da1997f9c817193301d4a306c9c35429252e9d09e29e6abd0fd11a708e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d5da1997f9c817193301d4a306c9c35429252e9d09e29e6abd0fd11a708e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08d5da1997f9c817193301d4a306c9c35429252e9d09e29e6abd0fd11a708e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 podman[98386]: 2025-11-29 07:30:03.84150658 +0000 UTC m=+0.028771519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:03 compute-0 podman[98386]: 2025-11-29 07:30:03.958023232 +0000 UTC m=+0.145288181 container init 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:30:03 compute-0 podman[98386]: 2025-11-29 07:30:03.969442037 +0000 UTC m=+0.156706966 container start 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:30:03 compute-0 podman[98386]: 2025-11-29 07:30:03.973611789 +0000 UTC m=+0.160876738 container attach 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.083978837 +0000 UTC m=+0.069231890 container create a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:04 compute-0 systemd[1]: Started libpod-conmon-a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a.scope.
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.062211496 +0000 UTC m=+0.047464539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.176129218 +0000 UTC m=+0.161382261 container init a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.184905992 +0000 UTC m=+0.170159045 container start a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.189724491 +0000 UTC m=+0.174977524 container attach a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:30:04 compute-0 distracted_moser[98436]: 167 167
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.192139055 +0000 UTC m=+0.177392168 container died a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:30:04 compute-0 systemd[1]: libpod-a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6710f72c1e493a3bf6e9e76e9814bd2db4ccc4a7024bd69d27a9dda1b50b7e-merged.mount: Deactivated successfully.
Nov 29 07:30:04 compute-0 podman[98419]: 2025-11-29 07:30:04.239246174 +0000 UTC m=+0.224499227 container remove a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:04 compute-0 systemd[1]: libpod-conmon-a8e62aa28bd9b7cc50f080dd05239a8c7cf15717a0a464d7ce73fc4ae1a7745a.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 podman[98479]: 2025-11-29 07:30:04.450555748 +0000 UTC m=+0.047609813 container create 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:30:04 compute-0 systemd[1]: Started libpod-conmon-095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d.scope.
Nov 29 07:30:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1ed97dc245bb484f9f7d89ae89af3c958f8e55d24cc07b639ef63d3469e536/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:04 compute-0 podman[98479]: 2025-11-29 07:30:04.430929804 +0000 UTC m=+0.027983849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1ed97dc245bb484f9f7d89ae89af3c958f8e55d24cc07b639ef63d3469e536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1ed97dc245bb484f9f7d89ae89af3c958f8e55d24cc07b639ef63d3469e536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1ed97dc245bb484f9f7d89ae89af3c958f8e55d24cc07b639ef63d3469e536/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:04 compute-0 podman[98479]: 2025-11-29 07:30:04.538263991 +0000 UTC m=+0.135318046 container init 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:04 compute-0 podman[98479]: 2025-11-29 07:30:04.551882624 +0000 UTC m=+0.148936689 container start 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:30:04 compute-0 podman[98479]: 2025-11-29 07:30:04.556349613 +0000 UTC m=+0.153403658 container attach 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v100: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 07:30:04 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0[75233]: 2025-11-29T07:30:04.633+0000 7ff97d6c6640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
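
The single "fs volume create" dispatched at 07:30:04 fans out, via the mgr volumes module, into the three mon_commands above (two "osd pool create" calls and one "fs new"). The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks that fire immediately afterwards are expected at this point: the filesystem exists before any mds.cephfs daemon has been scheduled. A sketch of how a caller might wait out that transient state, assuming a node with an admin keyring; the 300-second budget is an arbitrary choice:

    import json
    import subprocess
    import time

    # Sketch: poll until the transient MDS_ALL_DOWN raised right after
    # "fs new" clears once cephadm deploys mds.cephfs. Everything beyond
    # the health-check name from the log is an assumption.
    def mds_all_down_active() -> bool:
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        return "MDS_ALL_DOWN" in json.loads(out).get("checks", {})

    deadline = time.time() + 300
    while mds_all_down_active():
        if time.time() > deadline:
            raise TimeoutError("MDS_ALL_DOWN did not clear in time")
        time.sleep(5)
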
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e2 new map
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:30:04.634900+0000
                                           modified        2025-11-29T07:30:04.634983+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : fsmap cephfs:0
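
The print_map dump above and the "fsmap cephfs:0" summary describe epoch 2 of the new filesystem: max_mds 1, no ranks in or up yet, metadata in pool 6 and data in pool 7. A sketch reading the same fields programmatically; it assumes the "ceph fs dump" JSON field names as commonly documented and a host with an admin keyring:

    import json
    import subprocess

    # Sketch: read the fsmap fields shown in print_map from "ceph fs dump".
    # Field names follow the JSON dump format; where this runs is an
    # assumption, not something the log states.
    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)

    for fs in dump["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "epoch:", m["epoch"], "max_mds:", m["max_mds"],
              "up:", m["up"], "data_pools:", m["data_pools"],
              "metadata_pool:", m["metadata_pool"])
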
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:30:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:04 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 07:30:04 compute-0 systemd[1]: libpod-5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 podman[98386]: 2025-11-29 07:30:04.67751987 +0000 UTC m=+0.864784829 container died 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: pgmap v99: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:04 compute-0 ceph-mon[75237]: 2.14 scrub starts
Nov 29 07:30:04 compute-0 ceph-mon[75237]: 2.14 scrub ok
Nov 29 07:30:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:30:04 compute-0 ceph-mon[75237]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 07:30:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 07:30:04 compute-0 ceph-mon[75237]: osdmap e37: 3 total, 3 up, 3 in
Nov 29 07:30:04 compute-0 ceph-mon[75237]: fsmap cephfs:0
Nov 29 07:30:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08d5da1997f9c817193301d4a306c9c35429252e9d09e29e6abd0fd11a708e4-merged.mount: Deactivated successfully.
Nov 29 07:30:04 compute-0 podman[98386]: 2025-11-29 07:30:04.733419133 +0000 UTC m=+0.920684052 container remove 5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d (image=quay.io/ceph/ceph:v18, name=sad_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:04 compute-0 systemd[1]: libpod-conmon-5f154276b7ec50134620029448f3c194b274c8f71e3667bc2c6c0dd15f705f6d.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 sudo[98327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-0 sudo[98537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbpvlwzmcvocdjzxoczvybtlnxxhyrse ; /usr/bin/python3'
Nov 29 07:30:04 compute-0 sudo[98537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:05 compute-0 python3[98539]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
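
This second one-shot container feeds /tmp/ceph_mds.yml (mounted as /home/ceph_spec.yaml) to "ceph orch apply --in-file". The spec file's contents are never printed in the log; the YAML below is a plausible minimal mds spec matching the "mds.cephfs" service and compute-0 placement recorded by the mgr, not the actual file:

    import subprocess
    import tempfile
    import textwrap

    # Assumed spec contents: a minimal mds service spec consistent with the
    # "Saving service mds.cephfs spec with placement compute-0" mgr log lines.
    SPEC = textwrap.dedent("""\
        service_type: mds
        service_id: cephfs
        placement:
          hosts:
            - compute-0
        """)

    with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
        f.write(SPEC)
        path = f.name

    # Same effect as the containerized call above, using a local ceph CLI.
    subprocess.run(["ceph", "orch", "apply", "--in-file", path], check=True)
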
Nov 29 07:30:05 compute-0 podman[98540]: 2025-11-29 07:30:05.138358929 +0000 UTC m=+0.050783498 container create fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:30:05 compute-0 systemd[1]: Started libpod-conmon-fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e.scope.
Nov 29 07:30:05 compute-0 podman[98540]: 2025-11-29 07:30:05.117619174 +0000 UTC m=+0.030043723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381d21df1e881c484d1a62f1a8d688bc539c9c8bfea82303032dacdc5a54faef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381d21df1e881c484d1a62f1a8d688bc539c9c8bfea82303032dacdc5a54faef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381d21df1e881c484d1a62f1a8d688bc539c9c8bfea82303032dacdc5a54faef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 podman[98540]: 2025-11-29 07:30:05.241148085 +0000 UTC m=+0.153572644 container init fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:05 compute-0 podman[98540]: 2025-11-29 07:30:05.246680482 +0000 UTC m=+0.159105051 container start fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:05 compute-0 podman[98540]: 2025-11-29 07:30:05.251390778 +0000 UTC m=+0.163815347 container attach fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]: {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     "0": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "devices": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "/dev/loop3"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             ],
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_name": "ceph_lv0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_size": "21470642176",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "name": "ceph_lv0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "tags": {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.crush_device_class": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.encrypted": "0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_id": "0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.vdo": "0"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             },
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "vg_name": "ceph_vg0"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         }
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     ],
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     "1": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "devices": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "/dev/loop4"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             ],
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_name": "ceph_lv1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_size": "21470642176",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "name": "ceph_lv1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "tags": {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.crush_device_class": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.encrypted": "0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_id": "1",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.vdo": "0"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             },
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "vg_name": "ceph_vg1"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         }
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     ],
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     "2": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "devices": [
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "/dev/loop5"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             ],
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_name": "ceph_lv2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_size": "21470642176",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "name": "ceph_lv2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "tags": {
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.crush_device_class": "",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.encrypted": "0",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osd_id": "2",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:                 "ceph.vdo": "0"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             },
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "type": "block",
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:             "vg_name": "ceph_vg2"
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:         }
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]:     ]
Nov 29 07:30:05 compute-0 flamboyant_almeida[98496]: }
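
The JSON emitted by flamboyant_almeida above is cephadm's "ceph-volume lvm list --format json" inventory: three OSDs, each a single LV (ceph_lv0 through ceph_lv2) on a loop device. A sketch reducing that payload to an osd_id -> device map; it assumes ceph-volume is installed locally, whereas cephadm wraps the same call in a container, as the sudo line at 07:30:03 shows:

    import json
    import subprocess

    # Sketch: map each OSD id from the payload above to its LV path and
    # backing device(s). The JSON shape (osd id -> list of LV dicts) is
    # exactly what the log shows.
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    for osd_id, lvs in sorted(json.loads(raw).items(),
                              key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
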
Nov 29 07:30:05 compute-0 systemd[1]: libpod-095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d.scope: Deactivated successfully.
Nov 29 07:30:05 compute-0 conmon[98496]: conmon 095c96eccf2bb852a3be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d.scope/container/memory.events
Nov 29 07:30:05 compute-0 podman[98563]: 2025-11-29 07:30:05.456131026 +0000 UTC m=+0.034183563 container died 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1ed97dc245bb484f9f7d89ae89af3c958f8e55d24cc07b639ef63d3469e536-merged.mount: Deactivated successfully.
Nov 29 07:30:05 compute-0 podman[98563]: 2025-11-29 07:30:05.527279347 +0000 UTC m=+0.105331804 container remove 095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:05 compute-0 systemd[1]: libpod-conmon-095c96eccf2bb852a3bec2cc95272d287a660421a37d2ad1561e25ea8d70fd2d.scope: Deactivated successfully.
Nov 29 07:30:05 compute-0 sudo[98334]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:05 compute-0 sudo[98578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:05 compute-0 sudo[98578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:05 compute-0 sudo[98578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:05 compute-0 ceph-mon[75237]: 3.19 deep-scrub starts
Nov 29 07:30:05 compute-0 ceph-mon[75237]: 3.19 deep-scrub ok
Nov 29 07:30:05 compute-0 ceph-mon[75237]: 4.17 scrub starts
Nov 29 07:30:05 compute-0 ceph-mon[75237]: 4.17 scrub ok
Nov 29 07:30:05 compute-0 ceph-mon[75237]: pgmap v100: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:05 compute-0 ceph-mon[75237]: from='client.14242 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:05 compute-0 ceph-mon[75237]: Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:05 compute-0 sudo[98622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:05 compute-0 sudo[98622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:05 compute-0 sudo[98622]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:05 compute-0 sudo[98647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:05 compute-0 sudo[98647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:05 compute-0 sudo[98647]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:05 compute-0 sudo[98672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:30:05 compute-0 sudo[98672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:05 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:05 compute-0 ceph-mgr[75527]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:05 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.244736179 +0000 UTC m=+0.043089581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:06 compute-0 xenodochial_goodall[98555]: Scheduled mds.cephfs update...
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.584251748 +0000 UTC m=+0.382605060 container create 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:06 compute-0 systemd[1]: libpod-fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e.scope: Deactivated successfully.
Nov 29 07:30:06 compute-0 podman[98540]: 2025-11-29 07:30:06.602841484 +0000 UTC m=+1.515266023 container died fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:06 compute-0 systemd[1]: Started libpod-conmon-79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e.scope.
Nov 29 07:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-381d21df1e881c484d1a62f1a8d688bc539c9c8bfea82303032dacdc5a54faef-merged.mount: Deactivated successfully.
Nov 29 07:30:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:06 compute-0 podman[98540]: 2025-11-29 07:30:06.655035478 +0000 UTC m=+1.567460007 container remove fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e (image=quay.io/ceph/ceph:v18, name=xenodochial_goodall, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:30:06 compute-0 systemd[1]: libpod-conmon-fc6aa72aa0abea556311438d82cfa5e373285c5629634bd4b70d55a911918c4e.scope: Deactivated successfully.
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.664780908 +0000 UTC m=+0.463134240 container init 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.670654616 +0000 UTC m=+0.469007928 container start 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:06 compute-0 friendly_heyrovsky[98764]: 167 167
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.67458232 +0000 UTC m=+0.472935632 container attach 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:30:06 compute-0 systemd[1]: libpod-79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e.scope: Deactivated successfully.
Nov 29 07:30:06 compute-0 conmon[98764]: conmon 79f3287d27ef0abd9d3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e.scope/container/memory.events
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.675826264 +0000 UTC m=+0.474179566 container died 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:30:06 compute-0 sudo[98537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-223170c9b2209afe0dfaa6a22256643d738d9ad79c0376b525cc71fb79c91db8-merged.mount: Deactivated successfully.
Nov 29 07:30:06 compute-0 podman[98739]: 2025-11-29 07:30:06.722167281 +0000 UTC m=+0.520520633 container remove 79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:06 compute-0 ceph-mon[75237]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:30:06 compute-0 ceph-mon[75237]: Saving service mds.cephfs spec with placement compute-0
Nov 29 07:30:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:06 compute-0 systemd[1]: libpod-conmon-79f3287d27ef0abd9d3b37bef335e6bb99b4668d183f063853b80423fb13880e.scope: Deactivated successfully.
Nov 29 07:30:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:06 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 29 07:30:06 compute-0 podman[98791]: 2025-11-29 07:30:06.897464314 +0000 UTC m=+0.051648120 container create ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 07:30:06 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 29 07:30:06 compute-0 systemd[1]: Started libpod-conmon-ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b.scope.
Nov 29 07:30:06 compute-0 podman[98791]: 2025-11-29 07:30:06.869832636 +0000 UTC m=+0.024016542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1aca97f94fc6d3795b6674f9d941a72ddb9d14f35df5800db71795d5bbc159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1aca97f94fc6d3795b6674f9d941a72ddb9d14f35df5800db71795d5bbc159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1aca97f94fc6d3795b6674f9d941a72ddb9d14f35df5800db71795d5bbc159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1aca97f94fc6d3795b6674f9d941a72ddb9d14f35df5800db71795d5bbc159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:07 compute-0 sudo[98885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbkdhviqhutxflaywlriyrrsdaxxgra ; /usr/bin/python3'
Nov 29 07:30:07 compute-0 sudo[98885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:07 compute-0 python3[98887]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:30:07 compute-0 sudo[98885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:07 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 29 07:30:07 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 29 07:30:07 compute-0 sudo[98958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxcwprhwijmloakotostkxvtivwfzbt ; /usr/bin/python3'
Nov 29 07:30:07 compute-0 sudo[98958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:07 compute-0 podman[98791]: 2025-11-29 07:30:07.944995773 +0000 UTC m=+1.099179639 container init ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:30:07 compute-0 ceph-mon[75237]: pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:07 compute-0 podman[98791]: 2025-11-29 07:30:07.952908664 +0000 UTC m=+1.107092470 container start ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 29 07:30:07 compute-0 podman[98791]: 2025-11-29 07:30:07.957648911 +0000 UTC m=+1.111832717 container attach ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:30:08 compute-0 python3[98960]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401407.1421554-36999-127917435374627/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=73fd3d3bf796904cf7cd5d4cb8d16865a2ca06f9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:08 compute-0 sudo[98958]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:08 compute-0 sudo[99010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nztstlwujbbvqulzysbmosizeuivrxvk ; /usr/bin/python3'
Nov 29 07:30:08 compute-0 sudo[99010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 29 07:30:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:08 compute-0 python3[99012]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:08 compute-0 podman[99014]: 2025-11-29 07:30:08.634052977 +0000 UTC m=+0.023755785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:08 compute-0 podman[99014]: 2025-11-29 07:30:08.874685805 +0000 UTC m=+0.264388623 container create 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:30:08 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 07:30:08 compute-0 systemd[1]: Started libpod-conmon-435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1.scope.
Nov 29 07:30:08 compute-0 sweet_hoover[98807]: {
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_id": 2,
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "type": "bluestore"
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     },
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_id": 0,
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "type": "bluestore"
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     },
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_id": 1,
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:         "type": "bluestore"
Nov 29 07:30:08 compute-0 sweet_hoover[98807]:     }
Nov 29 07:30:08 compute-0 sweet_hoover[98807]: }
Nov 29 07:30:08 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 07:30:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86361f87b1d394ed0b93ca2798e0b2a840e53b9883554a32a2f91ffd57c34d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86361f87b1d394ed0b93ca2798e0b2a840e53b9883554a32a2f91ffd57c34d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:08 compute-0 systemd[1]: libpod-ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b.scope: Deactivated successfully.
Nov 29 07:30:08 compute-0 systemd[1]: libpod-ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b.scope: Consumed 1.000s CPU time.
Nov 29 07:30:08 compute-0 podman[98791]: 2025-11-29 07:30:08.951838015 +0000 UTC m=+2.106021831 container died ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 4.19 deep-scrub starts
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 4.19 deep-scrub ok
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 3.1a scrub starts
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 3.1a scrub ok
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 2.1a scrub starts
Nov 29 07:30:08 compute-0 ceph-mon[75237]: 2.1a scrub ok
Nov 29 07:30:08 compute-0 podman[99014]: 2025-11-29 07:30:08.977774777 +0000 UTC m=+0.367477565 container init 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:08 compute-0 podman[99014]: 2025-11-29 07:30:08.988402802 +0000 UTC m=+0.378105580 container start 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e1aca97f94fc6d3795b6674f9d941a72ddb9d14f35df5800db71795d5bbc159-merged.mount: Deactivated successfully.
Nov 29 07:30:09 compute-0 podman[99014]: 2025-11-29 07:30:09.007395858 +0000 UTC m=+0.397098636 container attach 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:09 compute-0 podman[98791]: 2025-11-29 07:30:09.04560552 +0000 UTC m=+2.199789336 container remove ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:30:09 compute-0 systemd[1]: libpod-conmon-ab0b23ccedb980886f555be5934531df8195109ac388e5775461ab81b5994a8b.scope: Deactivated successfully.
Nov 29 07:30:09 compute-0 sudo[98672]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:09 compute-0 sudo[99071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:09 compute-0 sudo[99071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 sudo[99071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 sudo[99096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:30:09 compute-0 sudo[99096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 sudo[99096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 sudo[99121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:09 compute-0 sudo[99121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 sudo[99121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 sudo[99146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:09 compute-0 sudo[99146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 sudo[99146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 29 07:30:09 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 29 07:30:09 compute-0 sudo[99190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:09 compute-0 sudo[99190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 sudo[99190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:09 compute-0 sudo[99215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:30:09 compute-0 sudo[99215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 07:30:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383892769' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 07:30:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3383892769' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 07:30:09 compute-0 systemd[1]: libpod-435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1.scope: Deactivated successfully.
Nov 29 07:30:09 compute-0 podman[99014]: 2025-11-29 07:30:09.672668359 +0000 UTC m=+1.062371147 container died 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86361f87b1d394ed0b93ca2798e0b2a840e53b9883554a32a2f91ffd57c34d6-merged.mount: Deactivated successfully.
Nov 29 07:30:09 compute-0 podman[99014]: 2025-11-29 07:30:09.727590715 +0000 UTC m=+1.117293493 container remove 435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1 (image=quay.io/ceph/ceph:v18, name=trusting_cori, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:09 compute-0 systemd[1]: libpod-conmon-435335e8888360db9a137aef93202d8dab715dd4dbb36a2bced6dd168a6ed4b1.scope: Deactivated successfully.
Nov 29 07:30:09 compute-0 sudo[99010]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:10 compute-0 podman[99325]: 2025-11-29 07:30:10.076624367 +0000 UTC m=+0.056072908 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:10 compute-0 ceph-mon[75237]: pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:10 compute-0 ceph-mon[75237]: 4.1d scrub starts
Nov 29 07:30:10 compute-0 ceph-mon[75237]: 4.1d scrub ok
Nov 29 07:30:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:10 compute-0 ceph-mon[75237]: 2.1e deep-scrub starts
Nov 29 07:30:10 compute-0 ceph-mon[75237]: 2.1e deep-scrub ok
Nov 29 07:30:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3383892769' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 07:30:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3383892769' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 07:30:10 compute-0 podman[99325]: 2025-11-29 07:30:10.209683731 +0000 UTC m=+0.189132252 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:10 compute-0 sudo[99402]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqndwnxclzjffdyrmkxcthogndbecbv ; /usr/bin/python3'
Nov 29 07:30:10 compute-0 sudo[99402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:10 compute-0 python3[99407]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v104: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:10 compute-0 podman[99429]: 2025-11-29 07:30:10.525013393 +0000 UTC m=+0.033367301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:10 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Nov 29 07:30:10 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Nov 29 07:30:11 compute-0 podman[99429]: 2025-11-29 07:30:11.283064992 +0000 UTC m=+0.791418830 container create d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:30:11 compute-0 systemd[1]: Started libpod-conmon-d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4.scope.
Nov 29 07:30:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b566fadf059a04f767c7012154b2ff0cdf0c9b3d2d2dd653bcd374aea62c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1b566fadf059a04f767c7012154b2ff0cdf0c9b3d2d2dd653bcd374aea62c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:11 compute-0 podman[99429]: 2025-11-29 07:30:11.382696952 +0000 UTC m=+0.891050780 container init d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:11 compute-0 podman[99429]: 2025-11-29 07:30:11.397061665 +0000 UTC m=+0.905415513 container start d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:30:11 compute-0 podman[99429]: 2025-11-29 07:30:11.400849037 +0000 UTC m=+0.909202835 container attach d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:11 compute-0 sudo[99215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c4d805e2-d23d-42ec-aba9-1e349ec97f6f does not exist
Nov 29 07:30:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 65dfc655-de6b-4c1c-abfa-02c7d86c9962 does not exist
Nov 29 07:30:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d9f2ec36-41e6-4081-bde9-c4d8a118829c does not exist
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:11 compute-0 sudo[99491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:11 compute-0 sudo[99491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:11 compute-0 sudo[99491]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:11 compute-0 sudo[99516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:11 compute-0 sudo[99516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:11 compute-0 sudo[99516]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:11 compute-0 sudo[99541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:11 compute-0 sudo[99541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:11 compute-0 sudo[99541]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:11 compute-0 sudo[99585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:30:11 compute-0 sudo[99585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:11 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 29 07:30:11 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 29 07:30:11 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 29 07:30:11 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 29 07:30:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:30:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3960106602' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:30:12 compute-0 hopeful_pare[99466]: 
Nov 29 07:30:12 compute-0 hopeful_pare[99466]: {"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":205,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":3,"osd_up_since":1764401359,"num_in_osds":3,"osd_in_since":1764401310,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":100}],"num_pgs":100,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83976192,"bytes_avail":64327950336,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-11-29T07:30:06.561832+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 07:30:12 compute-0 systemd[1]: libpod-d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4.scope: Deactivated successfully.
Nov 29 07:30:12 compute-0 podman[99429]: 2025-11-29 07:30:12.062606323 +0000 UTC m=+1.570960131 container died d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1b566fadf059a04f767c7012154b2ff0cdf0c9b3d2d2dd653bcd374aea62c9-merged.mount: Deactivated successfully.
Nov 29 07:30:12 compute-0 podman[99429]: 2025-11-29 07:30:12.110867151 +0000 UTC m=+1.619220959 container remove d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4 (image=quay.io/ceph/ceph:v18, name=hopeful_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:30:12 compute-0 systemd[1]: libpod-conmon-d6c513a5d404fc3e96e334441cd8f5adf03b3c338a893336dd8c207c1317dee4.scope: Deactivated successfully.
Nov 29 07:30:12 compute-0 sudo[99402]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.230872427 +0000 UTC m=+0.049284937 container create 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:30:12 compute-0 systemd[1]: Started libpod-conmon-45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac.scope.
Nov 29 07:30:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:12 compute-0 sudo[99704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumtskgviplocrrbkrjcxuwswvkacqfv ; /usr/bin/python3'
Nov 29 07:30:12 compute-0 sudo[99704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.206160636 +0000 UTC m=+0.024573186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.322266938 +0000 UTC m=+0.140679468 container init 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.330880027 +0000 UTC m=+0.149292567 container start 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:30:12 compute-0 elated_ishizaka[99705]: 167 167
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.336818586 +0000 UTC m=+0.155231136 container attach 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:30:12 compute-0 systemd[1]: libpod-45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac.scope: Deactivated successfully.
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.33881123 +0000 UTC m=+0.157223820 container died 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d58cbb22681bda718a67432aaee1985dae099619897fc1be7c56736c9e6833-merged.mount: Deactivated successfully.
Nov 29 07:30:12 compute-0 podman[99664]: 2025-11-29 07:30:12.39350168 +0000 UTC m=+0.211914210 container remove 45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:30:12 compute-0 systemd[1]: libpod-conmon-45b4fc3f8e9f3e749fed56faa7c303e32076c0e8a36fe75113b13c1508d962ac.scope: Deactivated successfully.
Nov 29 07:30:12 compute-0 python3[99709]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:12 compute-0 podman[99725]: 2025-11-29 07:30:12.530834828 +0000 UTC m=+0.045134256 container create 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:12 compute-0 ceph-mon[75237]: pgmap v104: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:12 compute-0 ceph-mon[75237]: 4.1e deep-scrub starts
Nov 29 07:30:12 compute-0 ceph-mon[75237]: 4.1e deep-scrub ok
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3960106602' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:30:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:12 compute-0 podman[99743]: 2025-11-29 07:30:12.576085837 +0000 UTC m=+0.057850556 container create fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:30:12 compute-0 systemd[1]: Started libpod-conmon-11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b.scope.
Nov 29 07:30:12 compute-0 podman[99725]: 2025-11-29 07:30:12.509015475 +0000 UTC m=+0.023314903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5855abbb7920d5f4c27a1cd626664bd36d287bbcb3f42b0eb1aa7e3fe741c30e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5855abbb7920d5f4c27a1cd626664bd36d287bbcb3f42b0eb1aa7e3fe741c30e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 systemd[1]: Started libpod-conmon-fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73.scope.
Nov 29 07:30:12 compute-0 podman[99725]: 2025-11-29 07:30:12.633983114 +0000 UTC m=+0.148282532 container init 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:30:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 podman[99725]: 2025-11-29 07:30:12.641557496 +0000 UTC m=+0.155856924 container start 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:12 compute-0 podman[99725]: 2025-11-29 07:30:12.648152412 +0000 UTC m=+0.162451850 container attach 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:12 compute-0 podman[99743]: 2025-11-29 07:30:12.558402965 +0000 UTC m=+0.040167704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:12 compute-0 podman[99743]: 2025-11-29 07:30:12.661260372 +0000 UTC m=+0.143025121 container init fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:30:12 compute-0 podman[99743]: 2025-11-29 07:30:12.668195367 +0000 UTC m=+0.149960086 container start fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:30:12 compute-0 podman[99743]: 2025-11-29 07:30:12.672501462 +0000 UTC m=+0.154266201 container attach fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:30:12 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 07:30:12 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 07:30:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:30:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687992749' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:30:13 compute-0 epic_franklin[99760]: 
Nov 29 07:30:13 compute-0 epic_franklin[99760]: {"epoch":1,"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","modified":"2025-11-29T07:26:40.732989Z","created":"2025-11-29T07:26:40.732989Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 29 07:30:13 compute-0 epic_franklin[99760]: dumped monmap epoch 1
Nov 29 07:30:13 compute-0 systemd[1]: libpod-11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b.scope: Deactivated successfully.
Nov 29 07:30:13 compute-0 podman[99725]: 2025-11-29 07:30:13.271540982 +0000 UTC m=+0.785840360 container died 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5855abbb7920d5f4c27a1cd626664bd36d287bbcb3f42b0eb1aa7e3fe741c30e-merged.mount: Deactivated successfully.
Nov 29 07:30:13 compute-0 podman[99725]: 2025-11-29 07:30:13.320967022 +0000 UTC m=+0.835266410 container remove 11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b (image=quay.io/ceph/ceph:v18, name=epic_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:30:13 compute-0 systemd[1]: libpod-conmon-11a109b54faf515b12493db58f7c07a87a8764b3b2610619776c99c56d70735b.scope: Deactivated successfully.
Nov 29 07:30:13 compute-0 sudo[99704]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:13 compute-0 ceph-mon[75237]: 3.1c scrub starts
Nov 29 07:30:13 compute-0 ceph-mon[75237]: 3.1c scrub ok
Nov 29 07:30:13 compute-0 ceph-mon[75237]: 4.1f scrub starts
Nov 29 07:30:13 compute-0 ceph-mon[75237]: 4.1f scrub ok
Nov 29 07:30:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/687992749' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:30:13 compute-0 epic_gauss[99767]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:30:13 compute-0 epic_gauss[99767]: --> relative data size: 1.0
Nov 29 07:30:13 compute-0 epic_gauss[99767]: --> All data devices are unavailable
Nov 29 07:30:13 compute-0 systemd[1]: libpod-fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73.scope: Deactivated successfully.
Nov 29 07:30:13 compute-0 podman[99743]: 2025-11-29 07:30:13.793272078 +0000 UTC m=+1.275036797 container died fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:13 compute-0 systemd[1]: libpod-fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73.scope: Consumed 1.077s CPU time.
Nov 29 07:30:13 compute-0 sudo[99853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxofkteuyquoyfpmsdbxjeourzhwlwrm ; /usr/bin/python3'
Nov 29 07:30:13 compute-0 sudo[99853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:13 compute-0 sshd-session[99732]: Invalid user admin123 from 103.234.151.178 port 45688
Nov 29 07:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-372c5a768d3fb5364385068c97f84e6fd6ee70686bd1bb186553abae139f7b04-merged.mount: Deactivated successfully.
Nov 29 07:30:13 compute-0 podman[99743]: 2025-11-29 07:30:13.860155023 +0000 UTC m=+1.341919762 container remove fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gauss, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:30:13 compute-0 systemd[1]: libpod-conmon-fa2ce9abf1fac7557d1b9d65ef4a9d6432bb0925aadfbe5cec882b2673d35d73.scope: Deactivated successfully.
Nov 29 07:30:13 compute-0 sudo[99585]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:13 compute-0 python3[99857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:13 compute-0 sudo[99868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:13 compute-0 sudo[99868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:13 compute-0 sudo[99868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.021666448 +0000 UTC m=+0.058300199 container create c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:30:14 compute-0 systemd[1]: Started libpod-conmon-c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489.scope.
Nov 29 07:30:14 compute-0 sshd-session[99732]: Received disconnect from 103.234.151.178 port 45688:11: Bye Bye [preauth]
Nov 29 07:30:14 compute-0 sshd-session[99732]: Disconnected from invalid user admin123 103.234.151.178 port 45688 [preauth]
Nov 29 07:30:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:14 compute-0 sudo[99905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:14 compute-0 sudo[99905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbcf9f11d91249fa4b1342992a7a0a719796969cd2b2a28b73270bb559178d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbcf9f11d91249fa4b1342992a7a0a719796969cd2b2a28b73270bb559178d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.000971755 +0000 UTC m=+0.037605496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:14 compute-0 sudo[99905]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.110011748 +0000 UTC m=+0.146645519 container init c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.121248738 +0000 UTC m=+0.157882469 container start c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.125354547 +0000 UTC m=+0.161988318 container attach c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:14 compute-0 sudo[99936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:14 compute-0 sudo[99936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:14 compute-0 sudo[99936]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:14 compute-0 sudo[99962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:30:14 compute-0 sudo[99962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:14 compute-0 ceph-mon[75237]: pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:14 compute-0 ceph-mon[75237]: 4.8 scrub starts
Nov 29 07:30:14 compute-0 ceph-mon[75237]: 4.8 scrub ok
Nov 29 07:30:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.621976301 +0000 UTC m=+0.042732862 container create aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:30:14 compute-0 systemd[1]: Started libpod-conmon-aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce.scope.
Nov 29 07:30:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.690180033 +0000 UTC m=+0.110936614 container init aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.695484215 +0000 UTC m=+0.116240786 container start aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:14 compute-0 dazzling_ardinghelli[100056]: 167 167
Nov 29 07:30:14 compute-0 systemd[1]: libpod-aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce.scope: Deactivated successfully.
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.699971805 +0000 UTC m=+0.120728386 container attach aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.605002598 +0000 UTC m=+0.025759189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:14 compute-0 conmon[100056]: conmon aa76cbab05385e0a5dcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce.scope/container/memory.events
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.701392243 +0000 UTC m=+0.122148814 container died aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b3393fa10ed52eece18c5be7c63f08306c99e97a4269613287e4fa37e905ead-merged.mount: Deactivated successfully.
Nov 29 07:30:14 compute-0 podman[100040]: 2025-11-29 07:30:14.742910802 +0000 UTC m=+0.163667363 container remove aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:14 compute-0 systemd[1]: libpod-conmon-aa76cbab05385e0a5dcf82603e6f824abe2c0e08a41908c730b5eda4259fd3ce.scope: Deactivated successfully.
Nov 29 07:30:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 07:30:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/912323948' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 07:30:14 compute-0 vibrant_shaw[99931]: [client.openstack]
Nov 29 07:30:14 compute-0 vibrant_shaw[99931]:         key = AQACoCppAAAAABAAUMBgxavMgjAgzQEp37H3Rw==
Nov 29 07:30:14 compute-0 vibrant_shaw[99931]:         caps mgr = "allow *"
Nov 29 07:30:14 compute-0 vibrant_shaw[99931]:         caps mon = "profile rbd"
Nov 29 07:30:14 compute-0 vibrant_shaw[99931]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 29 07:30:14 compute-0 systemd[1]: libpod-c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489.scope: Deactivated successfully.
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.805312108 +0000 UTC m=+0.841945839 container died c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:14 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 07:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bbcf9f11d91249fa4b1342992a7a0a719796969cd2b2a28b73270bb559178d3-merged.mount: Deactivated successfully.
Nov 29 07:30:14 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 07:30:14 compute-0 podman[99884]: 2025-11-29 07:30:14.852968681 +0000 UTC m=+0.889602412 container remove c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489 (image=quay.io/ceph/ceph:v18, name=vibrant_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:30:14 compute-0 systemd[1]: libpod-conmon-c1c08a58b09ca8ddba128177d58e72fbcb76cda35253a76044fda7e15d705489.scope: Deactivated successfully.
Nov 29 07:30:14 compute-0 sudo[99853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:14 compute-0 podman[100095]: 2025-11-29 07:30:14.925475058 +0000 UTC m=+0.044169440 container create 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:14 compute-0 systemd[1]: Started libpod-conmon-382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b.scope.
Nov 29 07:30:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:14.90792847 +0000 UTC m=+0.026622832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc92355a231f090fbc76ad7719c57a9777ad10369f5aaf40d5d99a938f5c9c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc92355a231f090fbc76ad7719c57a9777ad10369f5aaf40d5d99a938f5c9c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc92355a231f090fbc76ad7719c57a9777ad10369f5aaf40d5d99a938f5c9c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc92355a231f090fbc76ad7719c57a9777ad10369f5aaf40d5d99a938f5c9c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:15.022347085 +0000 UTC m=+0.141041507 container init 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:15.02925338 +0000 UTC m=+0.147947732 container start 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:15.03301035 +0000 UTC m=+0.151704692 container attach 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:15 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:30:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/912323948' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]: {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     "0": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "devices": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "/dev/loop3"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             ],
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_name": "ceph_lv0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_size": "21470642176",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "name": "ceph_lv0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "tags": {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.crush_device_class": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.encrypted": "0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_id": "0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.vdo": "0"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             },
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "vg_name": "ceph_vg0"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         }
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     ],
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     "1": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "devices": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "/dev/loop4"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             ],
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_name": "ceph_lv1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_size": "21470642176",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "name": "ceph_lv1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "tags": {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.crush_device_class": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.encrypted": "0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_id": "1",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.vdo": "0"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             },
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "vg_name": "ceph_vg1"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         }
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     ],
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     "2": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "devices": [
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "/dev/loop5"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             ],
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_name": "ceph_lv2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_size": "21470642176",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "name": "ceph_lv2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "tags": {
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.crush_device_class": "",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.encrypted": "0",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osd_id": "2",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:                 "ceph.vdo": "0"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             },
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "type": "block",
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:             "vg_name": "ceph_vg2"
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:         }
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]:     ]
Nov 29 07:30:15 compute-0 stoic_bardeen[100111]: }
Nov 29 07:30:15 compute-0 systemd[1]: libpod-382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b.scope: Deactivated successfully.
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:15.810909198 +0000 UTC m=+0.929603540 container died 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc92355a231f090fbc76ad7719c57a9777ad10369f5aaf40d5d99a938f5c9c5-merged.mount: Deactivated successfully.
Nov 29 07:30:15 compute-0 podman[100095]: 2025-11-29 07:30:15.873975052 +0000 UTC m=+0.992669434 container remove 382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bardeen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:15 compute-0 systemd[1]: libpod-conmon-382444642efb7b65ec8bd8ba2925487ea21015f80ca56cacfaa2b123b840fc6b.scope: Deactivated successfully.
Nov 29 07:30:15 compute-0 sudo[99962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:15 compute-0 sudo[100154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:16 compute-0 sudo[100154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:16 compute-0 sudo[100154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:16 compute-0 sudo[100209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:16 compute-0 sudo[100209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:16 compute-0 sudo[100209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:16 compute-0 sudo[100273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:16 compute-0 sudo[100273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:16 compute-0 sudo[100273]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:16 compute-0 sudo[100318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:30:16 compute-0 sudo[100318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:16 compute-0 sudo[100380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikucvajvzagfsdlliqlephaewtiugjry ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401415.9135866-37073-75061446631391/async_wrapper.py j278314238448 30 /home/zuul/.ansible/tmp/ansible-tmp-1764401415.9135866-37073-75061446631391/AnsiballZ_command.py _'
Nov 29 07:30:16 compute-0 sudo[100380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:16 compute-0 ansible-async_wrapper.py[100382]: Invoked with j278314238448 30 /home/zuul/.ansible/tmp/ansible-tmp-1764401415.9135866-37073-75061446631391/AnsiballZ_command.py _
Nov 29 07:30:16 compute-0 ansible-async_wrapper.py[100396]: Starting module and watcher
Nov 29 07:30:16 compute-0 ansible-async_wrapper.py[100396]: Start watching 100397 (30)
Nov 29 07:30:16 compute-0 ansible-async_wrapper.py[100397]: Start module (100397)
Nov 29 07:30:16 compute-0 ansible-async_wrapper.py[100382]: Return async_wrapper task started.
Nov 29 07:30:16 compute-0 sudo[100380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:16 compute-0 python3[100399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.554171429 +0000 UTC m=+0.024983087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.75519144 +0000 UTC m=+0.226003048 container create 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:30:16 compute-0 ceph-mon[75237]: pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:16 compute-0 ceph-mon[75237]: 3.1f scrub starts
Nov 29 07:30:16 compute-0 ceph-mon[75237]: 3.1f scrub ok
Nov 29 07:30:16 compute-0 podman[100437]: 2025-11-29 07:30:16.796337068 +0000 UTC m=+0.230377254 container create 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:30:16 compute-0 systemd[1]: Started libpod-conmon-9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3.scope.
Nov 29 07:30:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:16 compute-0 systemd[1]: Started libpod-conmon-8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9.scope.
Nov 29 07:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0b1140a7588781a6ac22d5a85fad6a0ff5d7b1760151aedfde9d962839c4a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0b1140a7588781a6ac22d5a85fad6a0ff5d7b1760151aedfde9d962839c4a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.860719247 +0000 UTC m=+0.331530845 container init 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:30:16 compute-0 podman[100437]: 2025-11-29 07:30:16.773270962 +0000 UTC m=+0.207311148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.870452937 +0000 UTC m=+0.341264505 container start 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:30:16 compute-0 podman[100437]: 2025-11-29 07:30:16.871755692 +0000 UTC m=+0.305795888 container init 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.873816468 +0000 UTC m=+0.344628036 container attach 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:30:16 compute-0 podman[100437]: 2025-11-29 07:30:16.876660294 +0000 UTC m=+0.310700470 container start 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:16 compute-0 lucid_hawking[100452]: 167 167
Nov 29 07:30:16 compute-0 systemd[1]: libpod-9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3.scope: Deactivated successfully.
Nov 29 07:30:16 compute-0 podman[100437]: 2025-11-29 07:30:16.880618889 +0000 UTC m=+0.314659075 container attach 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.882259653 +0000 UTC m=+0.353071241 container died 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d6ab7b0836259cfbfeb3deb2b02a171e523fbd0e860417f9d0920025ecc9dac-merged.mount: Deactivated successfully.
Nov 29 07:30:16 compute-0 podman[100425]: 2025-11-29 07:30:16.921343057 +0000 UTC m=+0.392154625 container remove 9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:16 compute-0 systemd[1]: libpod-conmon-9052d61cebe3dea4f9307d6d391bbb12f82691eeebeb79a0a57f10881316e5e3.scope: Deactivated successfully.
Nov 29 07:30:17 compute-0 podman[100481]: 2025-11-29 07:30:17.06749168 +0000 UTC m=+0.041426417 container create ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:17 compute-0 systemd[1]: Started libpod-conmon-ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283.scope.
Nov 29 07:30:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462e047ecefb0e9668f7df44340af96514225d9f26e188c792ad1eacc6d1164f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:17 compute-0 podman[100481]: 2025-11-29 07:30:17.051811342 +0000 UTC m=+0.025746099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462e047ecefb0e9668f7df44340af96514225d9f26e188c792ad1eacc6d1164f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462e047ecefb0e9668f7df44340af96514225d9f26e188c792ad1eacc6d1164f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462e047ecefb0e9668f7df44340af96514225d9f26e188c792ad1eacc6d1164f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:17 compute-0 podman[100481]: 2025-11-29 07:30:17.168886459 +0000 UTC m=+0.142821196 container init ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:17 compute-0 podman[100481]: 2025-11-29 07:30:17.175765062 +0000 UTC m=+0.149699839 container start ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:17 compute-0 podman[100481]: 2025-11-29 07:30:17.180663183 +0000 UTC m=+0.154597950 container attach ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:17 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:17 compute-0 upbeat_volhard[100457]: 
Nov 29 07:30:17 compute-0 upbeat_volhard[100457]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:30:17 compute-0 systemd[1]: libpod-8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9.scope: Deactivated successfully.
Nov 29 07:30:17 compute-0 podman[100523]: 2025-11-29 07:30:17.469857837 +0000 UTC m=+0.032298224 container died 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:30:17 compute-0 sshd-session[100317]: Received disconnect from 114.34.106.146 port 46346:11: Bye Bye [preauth]
Nov 29 07:30:17 compute-0 sshd-session[100317]: Disconnected from authenticating user root 114.34.106.146 port 46346 [preauth]
Nov 29 07:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0b1140a7588781a6ac22d5a85fad6a0ff5d7b1760151aedfde9d962839c4a6-merged.mount: Deactivated successfully.
Nov 29 07:30:17 compute-0 podman[100523]: 2025-11-29 07:30:17.533029265 +0000 UTC m=+0.095469622 container remove 8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9 (image=quay.io/ceph/ceph:v18, name=upbeat_volhard, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:30:17 compute-0 systemd[1]: libpod-conmon-8b74afea815841332fbfaf46dfbb8418090168d6866cf7ac1ba8d69763aca9e9.scope: Deactivated successfully.
Nov 29 07:30:17 compute-0 ansible-async_wrapper.py[100397]: Module complete (100397)
Nov 29 07:30:17 compute-0 sudo[100584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqdherjfjsgwwyprgswqotpcoaoaraia ; /usr/bin/python3'
Nov 29 07:30:17 compute-0 sudo[100584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:17 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 07:30:17 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 07:30:17 compute-0 ceph-mon[75237]: pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
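The pgmap line condenses cluster capacity to a single figure (60 GiB raw — presumably the three ~20 GiB OSD logical volumes inventoried below). For the per-pool breakdown behind the same numbers, the standard CLI calls are:

  ceph df       # RAW USED / AVAIL per device class and per pool
  ceph pg stat  # the same one-line pg summary the mon logs as "pgmap vN"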
Nov 29 07:30:17 compute-0 ceph-mon[75237]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:17 compute-0 python3[100586]: ansible-ansible.legacy.async_status Invoked with jid=j278314238448.100382 mode=status _async_dir=/root/.ansible_async
Nov 29 07:30:17 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 29 07:30:17 compute-0 sudo[100584]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:17 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 29 07:30:17 compute-0 sudo[100644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayoakzvljrggpgbowauwpbyljkwlrtto ; /usr/bin/python3'
Nov 29 07:30:17 compute-0 sudo[100644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:18 compute-0 python3[100646]: ansible-ansible.legacy.async_status Invoked with jid=j278314238448.100382 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:30:18 compute-0 sudo[100644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:18 compute-0 nice_blackburn[100497]: {
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_id": 2,
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "type": "bluestore"
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     },
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_id": 0,
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "type": "bluestore"
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     },
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_id": 1,
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:         "type": "bluestore"
Nov 29 07:30:18 compute-0 nice_blackburn[100497]:     }
Nov 29 07:30:18 compute-0 nice_blackburn[100497]: }
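The JSON block above maps each OSD uuid to its BlueStore logical volume; the field layout matches `ceph-volume raw list` output (an assumption — the exact subcommand run inside nice_blackburn is not captured here). Assuming that JSON were saved to osds.json, the osd-to-device table falls out with jq:

  jq -r 'to_entries[] | "osd.\(.value.osd_id)\t\(.value.device)"' osds.json
  # osd.2   /dev/mapper/ceph_vg2-ceph_lv2
  # osd.0   /dev/mapper/ceph_vg0-ceph_lv0
  # osd.1   /dev/mapper/ceph_vg1-ceph_lv1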
Nov 29 07:30:18 compute-0 systemd[1]: libpod-ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283.scope: Deactivated successfully.
Nov 29 07:30:18 compute-0 podman[100664]: 2025-11-29 07:30:18.21686361 +0000 UTC m=+0.030974489 container died ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-462e047ecefb0e9668f7df44340af96514225d9f26e188c792ad1eacc6d1164f-merged.mount: Deactivated successfully.
Nov 29 07:30:18 compute-0 sudo[100702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hebznbwoqpsyxryqwdrsvvvljnmzizpo ; /usr/bin/python3'
Nov 29 07:30:18 compute-0 sudo[100702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:18 compute-0 podman[100664]: 2025-11-29 07:30:18.64068273 +0000 UTC m=+0.454793519 container remove ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:30:18 compute-0 systemd[1]: libpod-conmon-ef6b8ac6289ff0f9959291fe6ac679ec9fd9b1fb1c19046cedef4dd7422ae283.scope: Deactivated successfully.
Nov 29 07:30:18 compute-0 sudo[100318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:18 compute-0 python3[100704]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:18 compute-0 podman[100705]: 2025-11-29 07:30:18.813267659 +0000 UTC m=+0.110738978 container create 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:18 compute-0 ceph-mon[75237]: 4.7 scrub starts
Nov 29 07:30:18 compute-0 ceph-mon[75237]: 4.7 scrub ok
Nov 29 07:30:18 compute-0 podman[100705]: 2025-11-29 07:30:18.728022922 +0000 UTC m=+0.025494261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:18 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev d52ec90a-e6ae-4e3b-913c-4178eaf80621 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 07:30:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rpfenx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rpfenx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rpfenx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
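The dispatched-and-finished pair above is the cephadm mgr module minting the RGW daemon's keyring. The equivalent hand-run command, with the same entity and caps the audit line records:

  ceph auth get-or-create client.rgw.rgw.compute-0.rpfenx \
      mon 'allow *' \
      mgr 'allow rw' \
      osd 'allow rwx tag rgw *=*'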
Nov 29 07:30:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 07:30:18 compute-0 systemd[1]: Started libpod-conmon-18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2.scope.
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:18 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.rpfenx on compute-0
Nov 29 07:30:18 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.rpfenx on compute-0
Nov 29 07:30:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9238a3a2c6f996faa0e2961e41678fa9a26077676b96d3499057aa61b2a164/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9238a3a2c6f996faa0e2961e41678fa9a26077676b96d3499057aa61b2a164/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:18 compute-0 sudo[100724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:18 compute-0 sudo[100724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:18 compute-0 sudo[100724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:18 compute-0 sudo[100749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:18 compute-0 sudo[100749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:18 compute-0 sudo[100749]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:19 compute-0 sudo[100774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:19 compute-0 sudo[100774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:19 compute-0 sudo[100774]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:19 compute-0 podman[100705]: 2025-11-29 07:30:19.075063802 +0000 UTC m=+0.372535131 container init 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:30:19 compute-0 podman[100705]: 2025-11-29 07:30:19.082281314 +0000 UTC m=+0.379752623 container start 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:19 compute-0 sudo[100799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:30:19 compute-0 sudo[100799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
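Here the mgr drives a deploy through the cephadm copy staged under /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/. After it returns, the host-side view of deployed daemons can be listed; a sketch, assuming the cephadm CLI is on PATH and jq is available:

  cephadm ls | jq -r '.[].name'
  # expected to include rgw.rgw.compute-0.rpfenx once this deploy completes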
Nov 29 07:30:19 compute-0 podman[100705]: 2025-11-29 07:30:19.172169225 +0000 UTC m=+0.469640604 container attach 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.447109189 +0000 UTC m=+0.020309264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.59428916 +0000 UTC m=+0.167489235 container create d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:30:19 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:19 compute-0 strange_hermann[100721]: 
Nov 29 07:30:19 compute-0 strange_hermann[100721]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:30:19 compute-0 systemd[1]: libpod-18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2.scope: Deactivated successfully.
Nov 29 07:30:19 compute-0 podman[100705]: 2025-11-29 07:30:19.681975602 +0000 UTC m=+0.979446921 container died 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:30:19 compute-0 systemd[1]: Started libpod-conmon-d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25.scope.
Nov 29 07:30:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.758072655 +0000 UTC m=+0.331272740 container init d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.764195528 +0000 UTC m=+0.337395573 container start d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:19 compute-0 epic_visvesvaraya[100913]: 167 167
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.768627046 +0000 UTC m=+0.341827111 container attach d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:19 compute-0 systemd[1]: libpod-d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25.scope: Deactivated successfully.
Nov 29 07:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9238a3a2c6f996faa0e2961e41678fa9a26077676b96d3499057aa61b2a164-merged.mount: Deactivated successfully.
Nov 29 07:30:19 compute-0 podman[100882]: 2025-11-29 07:30:19.778300945 +0000 UTC m=+0.351500990 container died d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:30:19 compute-0 podman[100705]: 2025-11-29 07:30:19.966270645 +0000 UTC m=+1.263741974 container remove 18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2 (image=quay.io/ceph/ceph:v18, name=strange_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:30:19 compute-0 ceph-mon[75237]: 3.a scrub starts
Nov 29 07:30:19 compute-0 ceph-mon[75237]: 3.a scrub ok
Nov 29 07:30:19 compute-0 ceph-mon[75237]: pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rpfenx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rpfenx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:19 compute-0 ceph-mon[75237]: Deploying daemon rgw.rgw.compute-0.rpfenx on compute-0
Nov 29 07:30:19 compute-0 systemd[1]: libpod-conmon-18da14d45a6a57aceb262c043e7c5b0c2696e494a85bfcda560c70c80d249fa2.scope: Deactivated successfully.
Nov 29 07:30:19 compute-0 sudo[100702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7388eb0935ecd2d45634f4ce428e798373cb2a922cad4f41a0225ed5ec2442db-merged.mount: Deactivated successfully.
Nov 29 07:30:20 compute-0 podman[100882]: 2025-11-29 07:30:20.216462378 +0000 UTC m=+0.789662423 container remove d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 07:30:20 compute-0 systemd[1]: libpod-conmon-d075ad8230d2b8b890eb681be28a7e36812915a9eb1428c4c2b464067965bd25.scope: Deactivated successfully.
Nov 29 07:30:20 compute-0 systemd[1]: Reloading.
Nov 29 07:30:20 compute-0 systemd-rc-local-generator[100960]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:20 compute-0 systemd-sysv-generator[100963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:20 compute-0 systemd[1]: Reloading.
Nov 29 07:30:20 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 29 07:30:20 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 29 07:30:20 compute-0 systemd-rc-local-generator[100997]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:20 compute-0 systemd-sysv-generator[101000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:20 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 07:30:20 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 07:30:20 compute-0 sudo[101030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfeychtnmtcfttqwtunetajymywilhdx ; /usr/bin/python3'
Nov 29 07:30:20 compute-0 sudo[101030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:20 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.rpfenx for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:30:20 compute-0 ceph-mon[75237]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:20 compute-0 ceph-mon[75237]: 3.9 scrub starts
Nov 29 07:30:21 compute-0 python3[101035]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:21 compute-0 podman[101061]: 2025-11-29 07:30:21.155836138 +0000 UTC m=+0.030029312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:21 compute-0 podman[101061]: 2025-11-29 07:30:21.350395375 +0000 UTC m=+0.224588499 container create 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:30:21 compute-0 systemd[1]: Started libpod-conmon-73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6.scope.
Nov 29 07:30:21 compute-0 ansible-async_wrapper.py[100396]: Done in kid B.
Nov 29 07:30:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89153db9a489bcc936d42dee6a2cce21e03aed5c735d859ca7326e8989423dd3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89153db9a489bcc936d42dee6a2cce21e03aed5c735d859ca7326e8989423dd3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 podman[101061]: 2025-11-29 07:30:21.459014757 +0000 UTC m=+0.333207901 container init 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:21 compute-0 podman[101061]: 2025-11-29 07:30:21.467436571 +0000 UTC m=+0.341629705 container start 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:30:21 compute-0 podman[101094]: 2025-11-29 07:30:21.4673998 +0000 UTC m=+0.057760434 container create 862276fb5523f4bea26d134a7195330a424330243f250c2c8d4097a77bd86ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-rgw-rgw-compute-0-rpfenx, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:30:21 compute-0 podman[101061]: 2025-11-29 07:30:21.47113116 +0000 UTC m=+0.345324284 container attach 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469f35e8332f0f3d24d94284a88c1b828c7eb31a1d1c337fe45db8f489eb268b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469f35e8332f0f3d24d94284a88c1b828c7eb31a1d1c337fe45db8f489eb268b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469f35e8332f0f3d24d94284a88c1b828c7eb31a1d1c337fe45db8f489eb268b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469f35e8332f0f3d24d94284a88c1b828c7eb31a1d1c337fe45db8f489eb268b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.rpfenx supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:21 compute-0 podman[101094]: 2025-11-29 07:30:21.436705541 +0000 UTC m=+0.027066195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:21 compute-0 podman[101094]: 2025-11-29 07:30:21.533430114 +0000 UTC m=+0.123790798 container init 862276fb5523f4bea26d134a7195330a424330243f250c2c8d4097a77bd86ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-rgw-rgw-compute-0-rpfenx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:21 compute-0 podman[101094]: 2025-11-29 07:30:21.539244209 +0000 UTC m=+0.129604863 container start 862276fb5523f4bea26d134a7195330a424330243f250c2c8d4097a77bd86ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-rgw-rgw-compute-0-rpfenx, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:21 compute-0 bash[101094]: 862276fb5523f4bea26d134a7195330a424330243f250c2c8d4097a77bd86ceb
Nov 29 07:30:21 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.rpfenx for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
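cephadm wraps each daemon in a templated systemd unit conventionally named ceph-<fsid>@<daemon-name>.service, which is the unit that just started here. To follow it on the host:

  systemctl status 'ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@rgw.rgw.compute-0.rpfenx.service'
  journalctl -fu 'ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@rgw.rgw.compute-0.rpfenx.service'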
Nov 29 07:30:21 compute-0 sudo[100799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:21 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:21 compute-0 radosgw[101118]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:30:21 compute-0 radosgw[101118]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 07:30:21 compute-0 radosgw[101118]: framework: beast
Nov 29 07:30:21 compute-0 radosgw[101118]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 07:30:21 compute-0 radosgw[101118]: init_numa not setting numa affinity
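radosgw is now up on the beast frontend at 192.168.122.100:8082, matching rgw_frontend_port 8082 and the 192.168.122.0/24 network in the service spec exported below. An unauthenticated GET is a quick liveness probe; RGW normally answers anonymous requests with a ListAllMyBucketsResult XML document:

  curl -s http://192.168.122.100:8082/
  # <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...>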
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:30:21 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev d52ec90a-e6ae-4e3b-913c-4178eaf80621 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event d52ec90a-e6ae-4e3b-913c-4178eaf80621 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev e3468e35-9a2e-4ea7-8456-5380893ba1bc (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yemcdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yemcdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yemcdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.yemcdg on compute-0
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.yemcdg on compute-0
Nov 29 07:30:21 compute-0 sudo[101180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:21 compute-0 sudo[101180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:21 compute-0 sudo[101180]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:21 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 07:30:21 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 07:30:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:21 compute-0 sudo[101205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:21 compute-0 sudo[101205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:21 compute-0 sudo[101205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:21 compute-0 sudo[101249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:21 compute-0 sudo[101249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:21 compute-0 sudo[101249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:21 compute-0 sudo[101274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
Nov 29 07:30:21 compute-0 sudo[101274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:21 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14261 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:22 compute-0 elated_kepler[101095]: 
Nov 29 07:30:22 compute-0 elated_kepler[101095]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 07:30:22 compute-0 ceph-mon[75237]: pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:22 compute-0 ceph-mon[75237]: 4.1c deep-scrub starts
Nov 29 07:30:22 compute-0 ceph-mon[75237]: 4.1c deep-scrub ok
Nov 29 07:30:22 compute-0 ceph-mon[75237]: 3.9 scrub ok
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yemcdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yemcdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:30:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:22 compute-0 systemd[1]: libpod-73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6.scope: Deactivated successfully.
Nov 29 07:30:22 compute-0 podman[101061]: 2025-11-29 07:30:22.022945628 +0000 UTC m=+0.897138762 container died 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-89153db9a489bcc936d42dee6a2cce21e03aed5c735d859ca7326e8989423dd3-merged.mount: Deactivated successfully.
Nov 29 07:30:22 compute-0 podman[101061]: 2025-11-29 07:30:22.077108985 +0000 UTC m=+0.951302109 container remove 73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6 (image=quay.io/ceph/ceph:v18, name=elated_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:30:22 compute-0 systemd[1]: libpod-conmon-73de6b3c17683690e0400b87e20b9cd1437c8fe6b2d752fe9e2f03c702eb25d6.scope: Deactivated successfully.
Nov 29 07:30:22 compute-0 sudo[101030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.383322474 +0000 UTC m=+0.059054268 container create fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:30:22 compute-0 systemd[1]: Started libpod-conmon-fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec.scope.
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.354901514 +0000 UTC m=+0.030633318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.483932342 +0000 UTC m=+0.159664126 container init fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.491054662 +0000 UTC m=+0.166786456 container start fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.495370847 +0000 UTC m=+0.171102631 container attach fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:22 compute-0 stoic_solomon[101374]: 167 167
Nov 29 07:30:22 compute-0 systemd[1]: libpod-fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec.scope: Deactivated successfully.
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.497495653 +0000 UTC m=+0.173227427 container died fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f8bcd0146051a6b484899fb78e40b1a235ed592f98dfe6c6ab74e365952002-merged.mount: Deactivated successfully.
Nov 29 07:30:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v110: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 07:30:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 29 07:30:22 compute-0 podman[101357]: 2025-11-29 07:30:22.731501443 +0000 UTC m=+0.407233197 container remove fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:30:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 29 07:30:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 07:30:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:30:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 07:30:22 compute-0 systemd[1]: libpod-conmon-fb168448f6936708c536deb80027f9292461ce93d65f9dc2b137ff3a8df849ec.scope: Deactivated successfully.
Nov 29 07:30:22 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 38 pg[8.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 07:30:22 compute-0 systemd[1]: Reloading.
Nov 29 07:30:22 compute-0 systemd-rc-local-generator[101419]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:22 compute-0 systemd-sysv-generator[101424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:23 compute-0 ceph-mon[75237]: 4.a deep-scrub starts
Nov 29 07:30:23 compute-0 ceph-mon[75237]: 4.a deep-scrub ok
Nov 29 07:30:23 compute-0 ceph-mon[75237]: Saving service rgw.rgw spec with placement compute-0
Nov 29 07:30:23 compute-0 ceph-mon[75237]: Deploying daemon mds.cephfs.compute-0.yemcdg on compute-0
Nov 29 07:30:23 compute-0 ceph-mon[75237]: 4.5 scrub starts
Nov 29 07:30:23 compute-0 ceph-mon[75237]: 4.5 scrub ok
Nov 29 07:30:23 compute-0 ceph-mon[75237]: from='client.14261 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:23 compute-0 ceph-mon[75237]: pgmap v110: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:23 compute-0 ceph-mon[75237]: osdmap e38: 3 total, 3 up, 3 in
Nov 29 07:30:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:30:23 compute-0 ceph-mon[75237]: 4.9 scrub starts
Nov 29 07:30:23 compute-0 sudo[101452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljmmvecpxidgycsdaoprzgwmyqnvzvos ; /usr/bin/python3'
Nov 29 07:30:23 compute-0 sudo[101452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:23 compute-0 systemd[1]: Reloading.
Nov 29 07:30:23 compute-0 systemd-rc-local-generator[101486]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:23 compute-0 python3[101456]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:23 compute-0 systemd-sysv-generator[101490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:23 compute-0 podman[101494]: 2025-11-29 07:30:23.372841273 +0000 UTC m=+0.029084647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:23 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.yemcdg for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e...
Nov 29 07:30:23 compute-0 podman[101494]: 2025-11-29 07:30:23.53183425 +0000 UTC m=+0.188077624 container create 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:30:23 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 07:30:23 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 07:30:23 compute-0 systemd[1]: Started libpod-conmon-85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd.scope.
Nov 29 07:30:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/504163748bf1aeef4f38f26627bd2e1c972690becaf1608aa9ddeb0fd3f37443/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/504163748bf1aeef4f38f26627bd2e1c972690becaf1608aa9ddeb0fd3f37443/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 7 completed events
Nov 29 07:30:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:30:23 compute-0 podman[101494]: 2025-11-29 07:30:23.687752074 +0000 UTC m=+0.343995508 container init 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:30:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:23 compute-0 ceph-mgr[75527]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 29 07:30:23 compute-0 podman[101494]: 2025-11-29 07:30:23.694743901 +0000 UTC m=+0.350987265 container start 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:23 compute-0 podman[101494]: 2025-11-29 07:30:23.700019182 +0000 UTC m=+0.356262586 container attach 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 07:30:23 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 07:30:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 07:30:23 compute-0 podman[101562]: 2025-11-29 07:30:23.84968579 +0000 UTC m=+0.065739787 container create 5c1d28f0f45c9b9b148d9054b96d937b2584e5f4319ddcc704a27fa7ecf36195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mds-cephfs-compute-0-yemcdg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:30:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 29 07:30:23 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 07:30:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7e4e656d31d531316b2c458bb5a8628d6fc2da2d6aa9529e9d54e3eae6208a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7e4e656d31d531316b2c458bb5a8628d6fc2da2d6aa9529e9d54e3eae6208a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7e4e656d31d531316b2c458bb5a8628d6fc2da2d6aa9529e9d54e3eae6208a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7e4e656d31d531316b2c458bb5a8628d6fc2da2d6aa9529e9d54e3eae6208a/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.yemcdg supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:23 compute-0 podman[101562]: 2025-11-29 07:30:23.807701968 +0000 UTC m=+0.023755995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:23 compute-0 podman[101562]: 2025-11-29 07:30:23.990257524 +0000 UTC m=+0.206311571 container init 5c1d28f0f45c9b9b148d9054b96d937b2584e5f4319ddcc704a27fa7ecf36195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mds-cephfs-compute-0-yemcdg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:23 compute-0 podman[101562]: 2025-11-29 07:30:23.996235644 +0000 UTC m=+0.212289661 container start 5c1d28f0f45c9b9b148d9054b96d937b2584e5f4319ddcc704a27fa7ecf36195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mds-cephfs-compute-0-yemcdg, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:24 compute-0 bash[101562]: 5c1d28f0f45c9b9b148d9054b96d937b2584e5f4319ddcc704a27fa7ecf36195
Nov 29 07:30:24 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.yemcdg for 321e9cb7-01a2-5759-bf8c-981c9a64aa3e.
Nov 29 07:30:24 compute-0 sudo[101274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:24 compute-0 ceph-mds[101581]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:30:24 compute-0 ceph-mds[101581]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 07:30:24 compute-0 ceph-mds[101581]: main not setting numa affinity
Nov 29 07:30:24 compute-0 ceph-mds[101581]: pidfile_write: ignore empty --pid-file
Nov 29 07:30:24 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mds-cephfs-compute-0-yemcdg[101577]: starting mds.cephfs.compute-0.yemcdg at 
Nov 29 07:30:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:24 compute-0 ceph-mon[75237]: 4.9 scrub ok
Nov 29 07:30:24 compute-0 ceph-mon[75237]: 4.1b scrub starts
Nov 29 07:30:24 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 ceph-mon[75237]: 3.6 scrub starts
Nov 29 07:30:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 07:30:24 compute-0 ceph-mon[75237]: osdmap e39: 3 total, 3 up, 3 in
Nov 29 07:30:24 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg Updating MDS map to version 2 from mon.0
Nov 29 07:30:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:30:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev e3468e35-9a2e-4ea7-8456-5380893ba1bc (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 07:30:24 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event e3468e35-9a2e-4ea7-8456-5380893ba1bc (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 07:30:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:30:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:24 compute-0 sudo[101621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:24 compute-0 sudo[101621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 sudo[101621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:24 compute-0 vigorous_elion[101525]: 
Nov 29 07:30:24 compute-0 vigorous_elion[101525]: [{"container_id": "0102e732daa3", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.38%", "created": "2025-11-29T07:28:08.778381Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T07:28:08.871287Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525690Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2025-11-29T07:28:08.634474Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.yemcdg", "daemon_name": "mds.cephfs.compute-0.yemcdg", "daemon_type": "mds", "events": ["2025-11-29T07:30:24.417674Z daemon:mds.cephfs.compute-0.yemcdg [INFO] \"Deployed mds.cephfs.compute-0.yemcdg on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "6283142e4ea1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "23.97%", "created": "2025-11-29T07:26:49.143676Z", "daemon_id": "compute-0.fwfehy", "daemon_name": "mgr.compute-0.fwfehy", "daemon_type": "mgr", "events": ["2025-11-29T07:28:17.313574Z daemon:mgr.compute-0.fwfehy [INFO] \"Reconfigured mgr.compute-0.fwfehy on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525607Z", "memory_usage": 549558681, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T07:26:49.000511Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mgr.compute-0.fwfehy", "version": "18.2.7"}, {"container_id": "3d40b4863c00", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.88%", "created": "2025-11-29T07:26:43.775569Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T07:28:16.427287Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525493Z", "memory_request": 2147483648, "memory_usage": 39457914, "ports": [], "service_name": "mon", "started": "2025-11-29T07:26:46.708477Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@mon.compute-0", "version": 
"18.2.7"}, {"container_id": "c5fd94a830bc", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.73%", "created": "2025-11-29T07:28:51.998473Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T07:28:53.702193Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525770Z", "memory_request": 4294967296, "memory_usage": 62149099, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:28:46.008311Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@osd.0", "version": "18.2.7"}, {"container_id": "ca50ff0f22db", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.08%", "created": "2025-11-29T07:29:05.276818Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T07:29:05.333161Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525847Z", "memory_request": 4294967296, "memory_usage": 61886955, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:29:05.174371Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@osd.1", "version": "18.2.7"}, {"container_id": "c7b261a6c7aa", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.98%", "created": "2025-11-29T07:29:10.875963Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-29T07:29:11.030875Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:30:11.525925Z", "memory_request": 4294967296, "memory_usage": 59915632, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:29:10.639528Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.rpfenx", "daemon_name": "rgw.rgw.compute-0.rpfenx", "daemon_type": "rgw", "events": ["2025-11-29T07:30:21.604688Z daemon:rgw.rgw.compute-0.rpfenx [INFO] \"Deployed rgw.rgw.compute-0.rpfenx on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": 
"rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 29 07:30:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v113: 101 pgs: 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Nov 29 07:30:24 compute-0 systemd[1]: libpod-85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd.scope: Deactivated successfully.
Nov 29 07:30:24 compute-0 conmon[101525]: conmon 85b539cfd730223b42ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd.scope/container/memory.events
Nov 29 07:30:24 compute-0 podman[101494]: 2025-11-29 07:30:24.583265953 +0000 UTC m=+1.239509387 container died 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:24 compute-0 sudo[101646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:30:24 compute-0 sudo[101646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 sudo[101646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 sudo[101680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:24 compute-0 sudo[101680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 sudo[101680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 29 07:30:24 compute-0 sudo[101710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:24 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 29 07:30:24 compute-0 sudo[101710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 sudo[101710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 sudo[101735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:24 compute-0 sudo[101735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 sudo[101735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:24 compute-0 sudo[101760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:30:24 compute-0 sudo[101760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 07:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-504163748bf1aeef4f38f26627bd2e1c972690becaf1608aa9ddeb0fd3f37443-merged.mount: Deactivated successfully.
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e3 new map
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:30:04.634900+0000
                                           modified        2025-11-29T07:30:04.634983+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yemcdg{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 07:30:25 compute-0 ceph-mon[75237]: 4.1b scrub ok
Nov 29 07:30:25 compute-0 ceph-mon[75237]: 3.6 scrub ok
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:25 compute-0 ceph-mon[75237]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:30:25 compute-0 ceph-mon[75237]: pgmap v113: 101 pgs: 101 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Nov 29 07:30:25 compute-0 ceph-mon[75237]: 4.4 scrub starts
Nov 29 07:30:25 compute-0 ceph-mon[75237]: 4.4 scrub ok
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg Updating MDS map to version 3 from mon.0
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg Monitors have assigned me to become a standby.
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] up:boot
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] as mds.0
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.yemcdg assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.yemcdg"} v 0) v1
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.yemcdg"}]: dispatch
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e4 new map
Nov 29 07:30:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:30:04.634900+0000
                                           modified        2025-11-29T07:30:25.307526+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.yemcdg{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg Updating MDS map to version 4 from mon.0
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x1
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x100
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x600
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x601
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x602
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.yemcdg=up:creating}
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x603
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x604
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x605
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x606
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x607
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x608
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.cache creating system inode with ino:0x609
Nov 29 07:30:25 compute-0 podman[101494]: 2025-11-29 07:30:25.405610217 +0000 UTC m=+2.061853601 container remove 85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd (image=quay.io/ceph/ceph:v18, name=vigorous_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 40 pg[9.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:25 compute-0 sudo[101452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:25 compute-0 systemd[1]: libpod-conmon-85b539cfd730223b42ed0dff630e53eaa0c0ad54d8f3db7f873649337035b1dd.scope: Deactivated successfully.
Nov 29 07:30:25 compute-0 ceph-mds[101581]: mds.0.4 creating_done
Nov 29 07:30:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.yemcdg is now active in filesystem cephfs as rank 0
Nov 29 07:30:25 compute-0 podman[101867]: 2025-11-29 07:30:25.987337286 +0000 UTC m=+0.176514616 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 07:30:26 compute-0 podman[101887]: 2025-11-29 07:30:26.28854109 +0000 UTC m=+0.188010122 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 07:30:26 compute-0 sudo[101923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkbpkbcsbpzwwqgctbaqqrwfxbvxardf ; /usr/bin/python3'
Nov 29 07:30:26 compute-0 podman[101867]: 2025-11-29 07:30:26.322048765 +0000 UTC m=+0.511226195 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 07:30:26 compute-0 sudo[101923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: osdmap e40: 3 total, 3 up, 3 in
Nov 29 07:30:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mds.? [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] up:boot
Nov 29 07:30:26 compute-0 ceph-mon[75237]: daemon mds.cephfs.compute-0.yemcdg assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 07:30:26 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:30:26 compute-0 ceph-mon[75237]: fsmap cephfs:0 1 up:standby
Nov 29 07:30:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.yemcdg"}]: dispatch
Nov 29 07:30:26 compute-0 ceph-mon[75237]: fsmap cephfs:1 {0=cephfs.compute-0.yemcdg=up:creating}
Nov 29 07:30:26 compute-0 ceph-mon[75237]: daemon mds.cephfs.compute-0.yemcdg is now active in filesystem cephfs as rank 0
Nov 29 07:30:26 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:26 compute-0 python3[101925]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v116: 102 pgs: 1 unknown, 101 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 07:30:26 compute-0 podman[101946]: 2025-11-29 07:30:26.505542626 +0000 UTC m=+0.022016499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:26 compute-0 podman[101946]: 2025-11-29 07:30:26.725506392 +0000 UTC m=+0.241980215 container create b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:30:26 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 07:30:26 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 07:30:26 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 07:30:26 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e5 new map
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:30:04.634900+0000
                                           modified        2025-11-29T07:30:26.522840+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.yemcdg{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 07:30:26 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg Updating MDS map to version 5 from mon.0
Nov 29 07:30:26 compute-0 ceph-mds[101581]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:30:26 compute-0 ceph-mds[101581]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 07:30:26 compute-0 ceph-mds[101581]: mds.0.4 recovery_done -- successful recovery!
Nov 29 07:30:26 compute-0 ceph-mds[101581]: mds.0.4 active_start
Nov 29 07:30:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] up:active
Nov 29 07:30:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.yemcdg=up:active}
Nov 29 07:30:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:26 compute-0 systemd[1]: Started libpod-conmon-b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705.scope.
Nov 29 07:30:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e9bdb70cf21beeb4ea611d5c9fbea454e04f3c55912fd8548e2107113a7cb8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e9bdb70cf21beeb4ea611d5c9fbea454e04f3c55912fd8548e2107113a7cb8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:26 compute-0 podman[101946]: 2025-11-29 07:30:26.948163109 +0000 UTC m=+0.464636982 container init b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:30:26 compute-0 podman[101946]: 2025-11-29 07:30:26.962976235 +0000 UTC m=+0.479450078 container start b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:30:27 compute-0 podman[101946]: 2025-11-29 07:30:27.007414671 +0000 UTC m=+0.523888504 container attach b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 07:30:27 compute-0 sudo[101760]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 07:30:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 07:30:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:30:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:30:27 compute-0 ceph-mon[75237]: osdmap e41: 3 total, 3 up, 3 in
Nov 29 07:30:27 compute-0 ceph-mon[75237]: pgmap v116: 102 pgs: 1 unknown, 101 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 07:30:27 compute-0 ceph-mon[75237]: 4.10 scrub starts
Nov 29 07:30:27 compute-0 ceph-mon[75237]: 3.f scrub starts
Nov 29 07:30:27 compute-0 ceph-mon[75237]: 4.10 scrub ok
Nov 29 07:30:27 compute-0 ceph-mon[75237]: 3.f scrub ok
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mds.? [v2:192.168.122.100:6814/878771175,v1:192.168.122.100:6815/878771175] up:active
Nov 29 07:30:27 compute-0 ceph-mon[75237]: fsmap cephfs:1 {0=cephfs.compute-0.yemcdg=up:active}
Nov 29 07:30:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:30:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745278248' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:30:27 compute-0 bold_lumiere[101979]: 
Nov 29 07:30:27 compute-0 bold_lumiere[101979]: {"fsid":"321e9cb7-01a2-5759-bf8c-981c9a64aa3e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":220,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764401359,"num_in_osds":3,"osd_in_since":1764401310,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":101},{"state_name":"unknown","count":1}],"num_pgs":102,"num_pools":9,"num_objects":27,"data_bytes":463028,"bytes_used":84099072,"bytes_avail":64327827456,"bytes_total":64411926528,"unknown_pgs_ratio":0.0098039219155907631,"read_bytes_sec":1023,"write_bytes_sec":4606,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.yemcdg","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-29T07:30:20.565248+0000","services":{}},"progress_events":{"ed1ac11b-15dd-4656-b0a2-091fb4f1c52b":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 29 07:30:27 compute-0 systemd[1]: libpod-b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705.scope: Deactivated successfully.
Nov 29 07:30:27 compute-0 podman[101946]: 2025-11-29 07:30:27.593482515 +0000 UTC m=+1.109956358 container died b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:30:27 compute-0 sudo[102099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:27 compute-0 sudo[102099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:27 compute-0 sudo[102099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e9bdb70cf21beeb4ea611d5c9fbea454e04f3c55912fd8548e2107113a7cb8-merged.mount: Deactivated successfully.
Nov 29 07:30:27 compute-0 sudo[102135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:27 compute-0 sudo[102135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:27 compute-0 sudo[102135]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 sudo[102163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:27 compute-0 sudo[102163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:27 compute-0 sudo[102163]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Nov 29 07:30:27 compute-0 podman[101946]: 2025-11-29 07:30:27.770892584 +0000 UTC m=+1.287366407 container remove b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705 (image=quay.io/ceph/ceph:v18, name=bold_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:27 compute-0 systemd[1]: libpod-conmon-b49eabde426abf2e36293f73837d1fc05ac810b50348bd6950dd512a27435705.scope: Deactivated successfully.
Nov 29 07:30:27 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Nov 29 07:30:27 compute-0 sudo[101923]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 sudo[102188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:30:27 compute-0 sudo[102188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:28 compute-0 sudo[102188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 3d43216d-5c0b-4e87-b1d9-cfad09f0c95f does not exist
Nov 29 07:30:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c3b2aa6a-4ec6-4b5f-9a8e-788c46fd227a does not exist
Nov 29 07:30:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 288578c8-1cb3-49ba-bf5d-551ea030312f does not exist
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:28 compute-0 sudo[102244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:28 compute-0 sudo[102244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:28 compute-0 sudo[102244]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:28 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 42 pg[10.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [2] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:28 compute-0 sudo[102269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:28 compute-0 sudo[102269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:28 compute-0 sudo[102269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:28 compute-0 sudo[102294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:28 compute-0 sudo[102294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:28 compute-0 sudo[102294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 07:30:28 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 43 pg[10.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [2] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:28 compute-0 ceph-mon[75237]: osdmap e42: 3 total, 3 up, 3 in
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2745278248' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: 3.c deep-scrub starts
Nov 29 07:30:28 compute-0 ceph-mon[75237]: 3.c deep-scrub ok
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26649190' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:30:28 compute-0 ceph-mon[75237]: osdmap e43: 3 total, 3 up, 3 in
Nov 29 07:30:28 compute-0 sudo[102319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:30:28 compute-0 sudo[102319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v119: 103 pgs: 2 unknown, 101 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 29 07:30:28 compute-0 sudo[102371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdvvrheippyfyqmnsnltmxmnbyldidmx ; /usr/bin/python3'
Nov 29 07:30:28 compute-0 sudo[102371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:28 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 8 completed events
Nov 29 07:30:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:30:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 07:30:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 07:30:28 compute-0 python3[102380]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:28 compute-0 podman[102423]: 2025-11-29 07:30:28.889263324 +0000 UTC m=+0.048578788 container create 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:30:28 compute-0 podman[102424]: 2025-11-29 07:30:28.960484906 +0000 UTC m=+0.119190224 container create 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:28 compute-0 podman[102423]: 2025-11-29 07:30:28.869416464 +0000 UTC m=+0.028731948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:28 compute-0 podman[102424]: 2025-11-29 07:30:28.88422234 +0000 UTC m=+0.042927678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:28 compute-0 systemd[1]: Started libpod-conmon-06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067.scope.
Nov 29 07:30:28 compute-0 systemd[1]: Started libpod-conmon-481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545.scope.
Nov 29 07:30:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9eb49337fd077cc726f631fa8e5aa93becbc261b4d687b03b99a23490211d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9eb49337fd077cc726f631fa8e5aa93becbc261b4d687b03b99a23490211d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:29 compute-0 podman[102423]: 2025-11-29 07:30:29.015027983 +0000 UTC m=+0.174343547 container init 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:30:29 compute-0 podman[102424]: 2025-11-29 07:30:29.020352866 +0000 UTC m=+0.179058204 container init 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:29 compute-0 podman[102423]: 2025-11-29 07:30:29.023936002 +0000 UTC m=+0.183251496 container start 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:30:29 compute-0 podman[102423]: 2025-11-29 07:30:29.028110263 +0000 UTC m=+0.187425737 container attach 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:30:29 compute-0 podman[102424]: 2025-11-29 07:30:29.02835696 +0000 UTC m=+0.187062288 container start 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:30:29 compute-0 podman[102424]: 2025-11-29 07:30:29.031296058 +0000 UTC m=+0.190001396 container attach 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:30:29 compute-0 funny_curie[102456]: 167 167
Nov 29 07:30:29 compute-0 systemd[1]: libpod-481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545.scope: Deactivated successfully.
Nov 29 07:30:29 compute-0 podman[102424]: 2025-11-29 07:30:29.032730527 +0000 UTC m=+0.191435865 container died 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f4f7facebf7d51f51a9f4b479350b3790dd6a72d82a6f7015cc13b44210b179-merged.mount: Deactivated successfully.
Nov 29 07:30:29 compute-0 podman[102424]: 2025-11-29 07:30:29.077876362 +0000 UTC m=+0.236581690 container remove 481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_curie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:30:29 compute-0 systemd[1]: libpod-conmon-481631762d0a64956da2bb3e629f2b05d8349cd1db497f962e0186c48cd0e545.scope: Deactivated successfully.
Nov 29 07:30:29 compute-0 podman[102481]: 2025-11-29 07:30:29.289400222 +0000 UTC m=+0.069267671 container create a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:30:29 compute-0 systemd[1]: Started libpod-conmon-a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a.scope.
Nov 29 07:30:29 compute-0 podman[102481]: 2025-11-29 07:30:29.259873563 +0000 UTC m=+0.039741102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:29 compute-0 podman[102481]: 2025-11-29 07:30:29.374317 +0000 UTC m=+0.154184469 container init a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:29 compute-0 podman[102481]: 2025-11-29 07:30:29.383574687 +0000 UTC m=+0.163442136 container start a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:30:29 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 29 07:30:29 compute-0 podman[102481]: 2025-11-29 07:30:29.387731788 +0000 UTC m=+0.167599267 container attach a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:30:29 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 29 07:30:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 07:30:29 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 07:30:29 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 07:30:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:30:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334675303' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:30:30 compute-0 elastic_pasteur[102454]: 
Nov 29 07:30:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 07:30:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 07:30:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 07:30:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:30:30 compute-0 ceph-mon[75237]: pgmap v119: 103 pgs: 2 unknown, 101 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 29 07:30:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:30 compute-0 ceph-mon[75237]: 4.f scrub starts
Nov 29 07:30:30 compute-0 ceph-mon[75237]: 4.f scrub ok
Nov 29 07:30:30 compute-0 systemd[1]: libpod-06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067.scope: Deactivated successfully.
Nov 29 07:30:30 compute-0 elastic_pasteur[102454]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.rpfenx","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 29 07:30:30 compute-0 podman[102423]: 2025-11-29 07:30:30.071227625 +0000 UTC m=+1.230543129 container died 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9eb49337fd077cc726f631fa8e5aa93becbc261b4d687b03b99a23490211d0-merged.mount: Deactivated successfully.
Nov 29 07:30:30 compute-0 gallant_goodall[102498]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:30:30 compute-0 gallant_goodall[102498]: --> relative data size: 1.0
Nov 29 07:30:30 compute-0 gallant_goodall[102498]: --> All data devices are unavailable
Nov 29 07:30:30 compute-0 systemd[1]: libpod-a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a.scope: Deactivated successfully.
Nov 29 07:30:30 compute-0 systemd[1]: libpod-a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a.scope: Consumed 1.081s CPU time.
Nov 29 07:30:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v121: 104 pgs: 1 unknown, 103 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 241 B/s rd, 482 B/s wr, 1 op/s
Nov 29 07:30:30 compute-0 podman[102423]: 2025-11-29 07:30:30.651031501 +0000 UTC m=+1.810347005 container remove 06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067 (image=quay.io/ceph/ceph:v18, name=elastic_pasteur, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:30:30 compute-0 sudo[102371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:30 compute-0 podman[102481]: 2025-11-29 07:30:30.698646072 +0000 UTC m=+1.478513561 container died a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:30:30 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 07:30:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 07:30:31 compute-0 sshd-session[102539]: Invalid user sol from 80.94.92.182 port 44334
Nov 29 07:30:31 compute-0 sudo[102599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikmbqzzzjksfhlfevfjoootgitbqxwxe ; /usr/bin/python3'
Nov 29 07:30:31 compute-0 sudo[102599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:30:31 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:30:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 07:30:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 44 pg[11.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 07:30:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 07:30:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 07:30:31 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:30:31 compute-0 ceph-mon[75237]: 4.1a scrub starts
Nov 29 07:30:31 compute-0 ceph-mon[75237]: 4.1a scrub ok
Nov 29 07:30:31 compute-0 ceph-mon[75237]: 4.12 scrub starts
Nov 29 07:30:31 compute-0 ceph-mon[75237]: 4.12 scrub ok
Nov 29 07:30:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/334675303' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:30:31 compute-0 ceph-mon[75237]: osdmap e44: 3 total, 3 up, 3 in
Nov 29 07:30:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:30:31 compute-0 ceph-mon[75237]: pgmap v121: 104 pgs: 1 unknown, 103 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 241 B/s rd, 482 B/s wr, 1 op/s
Nov 29 07:30:31 compute-0 ceph-mon[75237]: 4.14 scrub starts
Nov 29 07:30:31 compute-0 sshd-session[102539]: Connection closed by invalid user sol 80.94.92.182 port 44334 [preauth]
Nov 29 07:30:31 compute-0 python3[102601]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
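(The ansible-ansible.legacy.command entry above shells out to podman to run the ceph CLI from quay.io/ceph/ceph:v18 against the local cluster. A rough Python sketch of the same invocation, reconstructed only from the _raw_params string in that log line; wrapping it in subprocess is illustrative, not something the playbook itself does:

import subprocess

# Argument list reconstructed verbatim from the _raw_params above.
cmd = [
    "podman", "run", "--rm", "--net=host", "--ipc=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
    "--fsid", "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "osd", "get-require-min-compat-client",
]
# Per the dreamy_maxwell output later in the log, this query returned "mimic".
print(subprocess.run(cmd, capture_output=True, text=True).stdout.strip())
)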
Nov 29 07:30:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0aecd04fca10f8348c6f8e07e0539019108b29a79779b38f6e58e056310388-merged.mount: Deactivated successfully.
Nov 29 07:30:31 compute-0 podman[102604]: 2025-11-29 07:30:31.901702775 +0000 UTC m=+0.117692664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:31 compute-0 podman[102563]: 2025-11-29 07:30:31.998263995 +0000 UTC m=+1.456584066 container remove a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_goodall, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:32 compute-0 systemd[1]: libpod-conmon-a572fbee144e402aa5a5f4467a14974b0af4c2b2f10b9cca4b8855eeb28cad9a.scope: Deactivated successfully.
Nov 29 07:30:32 compute-0 podman[102604]: 2025-11-29 07:30:32.004592843 +0000 UTC m=+0.220582712 container create 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:32 compute-0 systemd[1]: Started libpod-conmon-4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1.scope.
Nov 29 07:30:32 compute-0 sudo[102319]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:32 compute-0 systemd[1]: libpod-conmon-06931558abae9069c50740b635d3c9a0fdfe3951668ce946e66130de797bb067.scope: Deactivated successfully.
Nov 29 07:30:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1346108013337e23e522abedb6827f1b70da7536f1844d51c85e690da4be5e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1346108013337e23e522abedb6827f1b70da7536f1844d51c85e690da4be5e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:32 compute-0 sudo[102621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:32 compute-0 sudo[102621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:32 compute-0 sudo[102621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:32 compute-0 sudo[102647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:32 compute-0 sudo[102647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:32 compute-0 sudo[102647]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:32 compute-0 sudo[102672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:32 compute-0 sudo[102672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:32 compute-0 sudo[102672]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:32 compute-0 sudo[102697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:30:32 compute-0 sudo[102697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v123: 104 pgs: 1 unknown, 103 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 405 B/s wr, 1 op/s
Nov 29 07:30:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 07:30:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:30:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:30:32 compute-0 podman[102604]: 2025-11-29 07:30:32.68063982 +0000 UTC m=+0.896629719 container init 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:30:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 07:30:32 compute-0 podman[102604]: 2025-11-29 07:30:32.687919435 +0000 UTC m=+0.903909314 container start 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 07:30:32 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 29 07:30:32 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 07:30:32 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 07:30:33 compute-0 podman[102604]: 2025-11-29 07:30:33.227352823 +0000 UTC m=+1.443342692 container attach 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:30:33 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 29 07:30:33 compute-0 ceph-mon[75237]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:30:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:30:33 compute-0 ceph-mon[75237]: 4.14 scrub ok
Nov 29 07:30:33 compute-0 ceph-mon[75237]: osdmap e45: 3 total, 3 up, 3 in
Nov 29 07:30:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:30:33 compute-0 ceph-mon[75237]: pgmap v123: 104 pgs: 1 unknown, 103 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 405 B/s wr, 1 op/s
Nov 29 07:30:33 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=44/46 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:33 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 29 07:30:33 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.489236008 +0000 UTC m=+0.057223510 container create dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 07:30:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711457794' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 07:30:33 compute-0 dreamy_maxwell[102619]: mimic
Nov 29 07:30:33 compute-0 podman[102604]: 2025-11-29 07:30:33.532144413 +0000 UTC m=+1.748134282 container died 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:30:33 compute-0 systemd[1]: Started libpod-conmon-dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f.scope.
Nov 29 07:30:33 compute-0 systemd[1]: libpod-4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1.scope: Deactivated successfully.
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.457560581 +0000 UTC m=+0.025548103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.58330073 +0000 UTC m=+0.151288252 container init dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.591261453 +0000 UTC m=+0.159248955 container start dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:30:33 compute-0 kind_gauss[102800]: 167 167
Nov 29 07:30:33 compute-0 systemd[1]: libpod-dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f.scope: Deactivated successfully.
Nov 29 07:30:33 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.758561491 +0000 UTC m=+0.326548993 container attach dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:33 compute-0 podman[102781]: 2025-11-29 07:30:33.759387944 +0000 UTC m=+0.327375486 container died dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:30:33 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 07:30:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-983411b33700a63f0f6502f62a58b69b4b759277ab08379cfdeb1233094b8219-merged.mount: Deactivated successfully.
Nov 29 07:30:34 compute-0 podman[102781]: 2025-11-29 07:30:34.01237814 +0000 UTC m=+0.580365642 container remove dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1346108013337e23e522abedb6827f1b70da7536f1844d51c85e690da4be5e6-merged.mount: Deactivated successfully.
Nov 29 07:30:34 compute-0 podman[102604]: 2025-11-29 07:30:34.130849155 +0000 UTC m=+2.346839024 container remove 4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1 (image=quay.io/ceph/ceph:v18, name=dreamy_maxwell, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:34 compute-0 systemd[1]: libpod-conmon-4164be5893b7bc103b12a104a8b983eb127a4cc64ea848feb02116a88d6142c1.scope: Deactivated successfully.
Nov 29 07:30:34 compute-0 sudo[102599]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:34 compute-0 systemd[1]: libpod-conmon-dee74cf27aa6f4593f9733713c0660f1870a060533a7ebdd20943e55d228985f.scope: Deactivated successfully.
Nov 29 07:30:34 compute-0 radosgw[101118]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:30:34 compute-0 radosgw[101118]: framework: beast
Nov 29 07:30:34 compute-0 radosgw[101118]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 07:30:34 compute-0 radosgw[101118]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 07:30:34 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-rgw-rgw-compute-0-rpfenx[101114]: 2025-11-29T07:30:34.227+0000 7f9b97748940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:30:34 compute-0 podman[102835]: 2025-11-29 07:30:34.150024447 +0000 UTC m=+0.024009122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:34 compute-0 radosgw[101118]: starting handler: beast
Nov 29 07:30:34 compute-0 radosgw[101118]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:30:34 compute-0 radosgw[101118]: mgrc service_daemon_register rgw.14269 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.rpfenx,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6e741955-f69b-4332-8af3-629aba8c944b,zone_name=default,zonegroup_id=8938f58a-47ac-41f5-8283-81e0550422c8,zonegroup_name=default}
Nov 29 07:30:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 07:30:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v125: 104 pgs: 104 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 22 op/s
Nov 29 07:30:34 compute-0 podman[102835]: 2025-11-29 07:30:34.688885381 +0000 UTC m=+0.562870026 container create f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:30:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 07:30:34 compute-0 ceph-mon[75237]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:30:34 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:30:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1612613637' entity='client.rgw.rgw.compute-0.rpfenx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:30:34 compute-0 ceph-mon[75237]: osdmap e46: 3 total, 3 up, 3 in
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 4.d scrub starts
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 3.12 scrub starts
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 3.12 scrub ok
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 4.d scrub ok
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 4.e scrub starts
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 4.e scrub ok
Nov 29 07:30:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3711457794' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 2.1b scrub starts
Nov 29 07:30:34 compute-0 ceph-mon[75237]: 2.1b scrub ok
Nov 29 07:30:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 07:30:34 compute-0 systemd[1]: Started libpod-conmon-f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8.scope.
Nov 29 07:30:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296c9cbd559ab8e90757347d6505f32e33a9214aba717c9c3c4ead7a78f69c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296c9cbd559ab8e90757347d6505f32e33a9214aba717c9c3c4ead7a78f69c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296c9cbd559ab8e90757347d6505f32e33a9214aba717c9c3c4ead7a78f69c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f296c9cbd559ab8e90757347d6505f32e33a9214aba717c9c3c4ead7a78f69c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:34 compute-0 podman[102835]: 2025-11-29 07:30:34.979701228 +0000 UTC m=+0.853685873 container init f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:30:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 07:30:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 07:30:34 compute-0 podman[102835]: 2025-11-29 07:30:34.986479059 +0000 UTC m=+0.860463694 container start f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 07:30:34 compute-0 podman[102835]: 2025-11-29 07:30:34.990512617 +0000 UTC m=+0.864497262 container attach f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:30:35 compute-0 sudo[103425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afseoumvuzvwncpsxeejuawkmkdwrlfy ; /usr/bin/python3'
Nov 29 07:30:35 compute-0 sudo[103425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:35 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 29 07:30:35 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]: {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     "0": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "devices": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "/dev/loop3"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             ],
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_name": "ceph_lv0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_size": "21470642176",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "name": "ceph_lv0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "tags": {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.crush_device_class": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.encrypted": "0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_id": "0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.vdo": "0"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             },
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "vg_name": "ceph_vg0"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         }
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     ],
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     "1": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "devices": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "/dev/loop4"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             ],
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_name": "ceph_lv1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_size": "21470642176",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "name": "ceph_lv1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "tags": {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.crush_device_class": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.encrypted": "0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_id": "1",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.vdo": "0"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             },
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "vg_name": "ceph_vg1"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         }
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     ],
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     "2": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "devices": [
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "/dev/loop5"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             ],
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_name": "ceph_lv2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_size": "21470642176",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "name": "ceph_lv2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "tags": {
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.crush_device_class": "",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.encrypted": "0",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osd_id": "2",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:                 "ceph.vdo": "0"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             },
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "type": "block",
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:             "vg_name": "ceph_vg2"
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:         }
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]:     ]
Nov 29 07:30:35 compute-0 goofy_ishizaka[103397]: }
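(The JSON emitted by goofy_ishizaka above is the result of the cephadm-wrapped "ceph-volume ... lvm list --format json" call logged at 07:30:32: it maps OSD ids 0-2 to ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, backed by /dev/loop3-5. A small sketch, assuming that same JSON were captured to a hypothetical file lvm_list.json, of summarising the OSD-to-LV mapping:

import json

# Hypothetical capture of the "lvm list --format json" output shown above.
with open("lvm_list.json") as fh:
    lvm = json.load(fh)

# Keys are OSD ids as strings; each value is a list of LV entries with tags.
for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for entry in entries:
        tags = entry["tags"]
        print(f'osd.{osd_id}: {entry["lv_path"]} on {",".join(entry["devices"])} '
              f'(osd_fsid={tags["ceph.osd_fsid"]}, encrypted={tags["ceph.encrypted"]})')
)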
Nov 29 07:30:35 compute-0 systemd[1]: libpod-f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8.scope: Deactivated successfully.
Nov 29 07:30:35 compute-0 podman[102835]: 2025-11-29 07:30:35.782906151 +0000 UTC m=+1.656890826 container died f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:30:36 compute-0 python3[103427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:30:36 compute-0 ceph-mon[75237]: pgmap v125: 104 pgs: 104 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 22 op/s
Nov 29 07:30:36 compute-0 ceph-mon[75237]: 2.17 scrub starts
Nov 29 07:30:36 compute-0 ceph-mon[75237]: 2.17 scrub ok
Nov 29 07:30:36 compute-0 ceph-mon[75237]: osdmap e47: 3 total, 3 up, 3 in
Nov 29 07:30:36 compute-0 podman[103444]: 2025-11-29 07:30:36.260244621 +0000 UTC m=+0.047226642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:30:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v127: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.3 KiB/s wr, 21 op/s
Nov 29 07:30:36 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 07:30:36 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 07:30:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f296c9cbd559ab8e90757347d6505f32e33a9214aba717c9c3c4ead7a78f69c7-merged.mount: Deactivated successfully.
Nov 29 07:30:37 compute-0 ceph-mon[75237]: 3.15 scrub starts
Nov 29 07:30:37 compute-0 ceph-mon[75237]: 3.15 scrub ok
Nov 29 07:30:37 compute-0 ceph-mon[75237]: pgmap v127: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.3 KiB/s wr, 21 op/s
Nov 29 07:30:37 compute-0 ceph-mon[75237]: 3.17 scrub starts
Nov 29 07:30:37 compute-0 ceph-mon[75237]: 3.17 scrub ok
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:30:38
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v128: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 4.6 KiB/s wr, 18 op/s
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:38 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event ed1ac11b-15dd-4656-b0a2-091fb4f1c52b (Global Recovery Event) in 15 seconds
Nov 29 07:30:38 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 07:30:38 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:30:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:30:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 07:30:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 07:30:40 compute-0 podman[102835]: 2025-11-29 07:30:40.196804705 +0000 UTC m=+6.070789350 container remove f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:40 compute-0 sudo[102697]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:40 compute-0 sudo[103458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:40 compute-0 sudo[103458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:40 compute-0 sudo[103458]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:40 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 07:30:40 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 07:30:40 compute-0 sudo[103483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:40 compute-0 sudo[103483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:40 compute-0 sudo[103483]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:40 compute-0 sudo[103508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:40 compute-0 sudo[103508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:40 compute-0 sudo[103508]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:40 compute-0 sudo[103533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:30:40 compute-0 sudo[103533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v129: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 4.0 KiB/s wr, 33 op/s
Nov 29 07:30:40 compute-0 sshd[1003]: Timeout before authentication for connection from 14.29.181.34 to 38.102.83.203, pid = 88496
Nov 29 07:30:40 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 07:30:41 compute-0 ceph-mon[75237]: pgmap v128: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 4.6 KiB/s wr, 18 op/s
Nov 29 07:30:41 compute-0 ceph-mon[75237]: 2.15 scrub starts
Nov 29 07:30:41 compute-0 podman[103444]: 2025-11-29 07:30:41.391563987 +0000 UTC m=+5.178546008 container create 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:30:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 07:30:41 compute-0 systemd[1]: Started libpod-conmon-5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939.scope.
Nov 29 07:30:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6855031e07d71e3f7b14add32962683e9539855672e508f612055da6a04faf80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6855031e07d71e3f7b14add32962683e9539855672e508f612055da6a04faf80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:41 compute-0 podman[103444]: 2025-11-29 07:30:41.550374338 +0000 UTC m=+5.337356359 container init 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:41 compute-0 podman[103444]: 2025-11-29 07:30:41.56054174 +0000 UTC m=+5.347523741 container start 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:41 compute-0 systemd[1]: libpod-conmon-f5f30c0f5cab8ab5ed751eff702fb1b80bdd9dcb726ac038fcd0329d124d03b8.scope: Deactivated successfully.
Nov 29 07:30:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:41 compute-0 podman[103444]: 2025-11-29 07:30:41.900777407 +0000 UTC m=+5.687759418 container attach 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:41 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:30:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.120532447 +0000 UTC m=+0.105583481 container create 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.039588305 +0000 UTC m=+0.024639319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:42 compute-0 systemd[1]: Started libpod-conmon-52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91.scope.
Nov 29 07:30:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:42 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 29 07:30:42 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 29 07:30:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 07:30:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416972073' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 07:30:42 compute-0 busy_jemison[103572]: 
Nov 29 07:30:42 compute-0 systemd[1]: libpod-5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939.scope: Deactivated successfully.
Nov 29 07:30:42 compute-0 busy_jemison[103572]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 29 07:30:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 2.15 scrub ok
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 3.1b scrub starts
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 3.1b scrub ok
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 4.11 scrub starts
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 4.11 scrub ok
Nov 29 07:30:42 compute-0 ceph-mon[75237]: pgmap v129: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 4.0 KiB/s wr, 33 op/s
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 2.d scrub starts
Nov 29 07:30:42 compute-0 ceph-mon[75237]: 2.d scrub ok
Nov 29 07:30:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.479716841 +0000 UTC m=+0.464767885 container init 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:42 compute-0 podman[103444]: 2025-11-29 07:30:42.48117308 +0000 UTC m=+6.268155101 container died 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.48792372 +0000 UTC m=+0.472974734 container start 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 07:30:42 compute-0 agitated_newton[103637]: 167 167
Nov 29 07:30:42 compute-0 systemd[1]: libpod-52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91.scope: Deactivated successfully.
Nov 29 07:30:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 07:30:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 07:30:42 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev 1128f2e1-2d79-41cb-88de-aa31485467a3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 07:30:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 07:30:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 07:30:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v131: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 07:30:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.620056739 +0000 UTC m=+0.605107733 container attach 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.62048219 +0000 UTC m=+0.605533184 container died 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:30:42 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 29 07:30:42 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 29 07:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6855031e07d71e3f7b14add32962683e9539855672e508f612055da6a04faf80-merged.mount: Deactivated successfully.
Nov 29 07:30:42 compute-0 podman[103444]: 2025-11-29 07:30:42.788393405 +0000 UTC m=+6.575375416 container remove 5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939 (image=quay.io/ceph/ceph:v18, name=busy_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:42 compute-0 systemd[1]: libpod-conmon-5326c017b1235b757bfca9afa36ae56bf29ba0abcd3affc442210067f38ee939.scope: Deactivated successfully.
Nov 29 07:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4a3aaabba6db0b0c989ca18671109a8b678d5926ee644c1c12fdf8f0a01c8f5-merged.mount: Deactivated successfully.
Nov 29 07:30:42 compute-0 sudo[103425]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:42 compute-0 podman[103621]: 2025-11-29 07:30:42.821604142 +0000 UTC m=+0.806655136 container remove 52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_newton, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:30:42 compute-0 systemd[1]: libpod-conmon-52cc4288b2f0bbfe4d3fd7a73cd845467d7efdfd24d8f620da58a213a7399e91.scope: Deactivated successfully.
Nov 29 07:30:43 compute-0 podman[103678]: 2025-11-29 07:30:43.043326444 +0000 UTC m=+0.115827084 container create 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:43 compute-0 podman[103678]: 2025-11-29 07:30:42.951775869 +0000 UTC m=+0.024276499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:43 compute-0 systemd[1]: Started libpod-conmon-112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f.scope.
Nov 29 07:30:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ee9e600a37ff417c2f41a9a5c8c48a14ada27bfb8a7b72214a868f12b556c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ee9e600a37ff417c2f41a9a5c8c48a14ada27bfb8a7b72214a868f12b556c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ee9e600a37ff417c2f41a9a5c8c48a14ada27bfb8a7b72214a868f12b556c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ee9e600a37ff417c2f41a9a5c8c48a14ada27bfb8a7b72214a868f12b556c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:43 compute-0 podman[103678]: 2025-11-29 07:30:43.336029672 +0000 UTC m=+0.408530362 container init 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:30:43 compute-0 podman[103678]: 2025-11-29 07:30:43.342605148 +0000 UTC m=+0.415105778 container start 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:30:43 compute-0 podman[103678]: 2025-11-29 07:30:43.542756584 +0000 UTC m=+0.615257214 container attach 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:43 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 29 07:30:43 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 29 07:30:43 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 9 completed events
Nov 29 07:30:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:30:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 07:30:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]: {
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_id": 2,
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "type": "bluestore"
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     },
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_id": 0,
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "type": "bluestore"
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     },
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_id": 1,
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:         "type": "bluestore"
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]:     }
Nov 29 07:30:44 compute-0 suspicious_goldwasser[103694]: }
Nov 29 07:30:44 compute-0 systemd[1]: libpod-112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f.scope: Deactivated successfully.
Nov 29 07:30:44 compute-0 podman[103678]: 2025-11-29 07:30:44.37040604 +0000 UTC m=+1.442906670 container died 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:30:44 compute-0 systemd[1]: libpod-112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f.scope: Consumed 1.032s CPU time.
Nov 29 07:30:44 compute-0 ceph-mon[75237]: 4.18 deep-scrub starts
Nov 29 07:30:44 compute-0 ceph-mon[75237]: 4.18 deep-scrub ok
Nov 29 07:30:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3416972073' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 07:30:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:44 compute-0 ceph-mon[75237]: osdmap e48: 3 total, 3 up, 3 in
Nov 29 07:30:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 07:30:44 compute-0 ceph-mon[75237]: pgmap v131: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 07:30:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:44 compute-0 ceph-mon[75237]: 2.3 scrub starts
Nov 29 07:30:44 compute-0 ceph-mon[75237]: 2.3 scrub ok
Nov 29 07:30:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v132: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:30:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 07:30:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e8ee9e600a37ff417c2f41a9a5c8c48a14ada27bfb8a7b72214a868f12b556c-merged.mount: Deactivated successfully.
Nov 29 07:30:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 07:30:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 07:30:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 07:30:45 compute-0 ceph-mgr[75527]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Nov 29 07:30:45 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev abbdccd0-44b6-4f21-80e7-f4bae70199cf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 07:30:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:45 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 29 07:30:45 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 29 07:30:45 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 29 07:30:45 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 29 07:30:45 compute-0 podman[103678]: 2025-11-29 07:30:45.692949855 +0000 UTC m=+2.765450485 container remove 112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_goldwasser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:30:45 compute-0 sudo[103533]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:45 compute-0 systemd[1]: libpod-conmon-112ca3c9460aa19514ea50ad4604cf43b08e7e29c403e286f5fb3a6418a82b9f.scope: Deactivated successfully.
Nov 29 07:30:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 07:30:45 compute-0 ceph-mon[75237]: 2.16 scrub starts
Nov 29 07:30:45 compute-0 ceph-mon[75237]: 2.16 scrub ok
Nov 29 07:30:45 compute-0 ceph-mon[75237]: 4.13 scrub starts
Nov 29 07:30:45 compute-0 ceph-mon[75237]: pgmap v132: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:30:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 07:30:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:45 compute-0 ceph-mon[75237]: osdmap e49: 3 total, 3 up, 3 in
Nov 29 07:30:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 07:30:46 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev 7b68149c-f1c3-474c-a7b9-a92520cb7f3d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:46 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 21b22cbe-691b-4a32-8901-56958f8a0606 does not exist
Nov 29 07:30:46 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 353f8918-8c79-4ebe-8f1c-a87b0fb5a804 does not exist
Nov 29 07:30:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 29 07:30:46 compute-0 sudo[103739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:46 compute-0 sudo[103739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 sudo[103739]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 29 07:30:46 compute-0 sudo[103764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:30:46 compute-0 sudo[103764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 sudo[103764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 sudo[103789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:46 compute-0 sudo[103789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 sudo[103789]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 sudo[103814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:46 compute-0 sudo[103814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 sudo[103814]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 sudo[103839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:46 compute-0 sudo[103839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 sudo[103839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v135: 135 pgs: 31 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 07:30:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 07:30:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 07:30:46 compute-0 sudo[103864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:30:46 compute-0 sudo[103864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 07:30:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=13.745587349s) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active pruub 108.622474670s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=13.745587349s) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown pruub 108.622474670s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.17( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.10( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.6( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:47 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 29 07:30:48 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 4.13 scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 3.1d scrub starts
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 3.1d scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 2.8 scrub starts
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 2.8 scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:48 compute-0 ceph-mon[75237]: osdmap e50: 3 total, 3 up, 3 in
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 3.5 scrub starts
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 3.5 scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: pgmap v135: 135 pgs: 31 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 2.13 scrub starts
Nov 29 07:30:48 compute-0 ceph-mon[75237]: 2.13 scrub ok
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 07:30:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 07:30:48 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev 0aeacb9f-5277-4b98-af3e-27da61504bf0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 07:30:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:48 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 51 pg[6.0( v 43'39 (0'0,43'39] local-lis/les=26/27 n=22 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51 pruub=14.798465729s) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 40'38 mlcod 40'38 active pruub 129.225250244s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:30:48 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 51 pg[6.0( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51 pruub=14.798465729s) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 40'38 mlcod 0'0 unknown pruub 129.225250244s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:48 compute-0 podman[103961]: 2025-11-29 07:30:48.205608887 +0000 UTC m=+1.165725287 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1f( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.10( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.b( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.6( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.0( empty local-lis/les=49/51 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.e( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1b( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.a( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.8( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.1c( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.d( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.17( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [2] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:48 compute-0 podman[103961]: 2025-11-29 07:30:48.319434018 +0000 UTC m=+1.279550438 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:30:48 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 07:30:48 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 07:30:48 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 51 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51 pruub=8.497483253s) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active pruub 110.268386841s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:30:48 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 51 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51 pruub=8.497483253s) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown pruub 110.268386841s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v137: 181 pgs: 77 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 07:30:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 07:30:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 07:30:49 compute-0 sudo[103864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 2.1c scrub starts
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 2.1c scrub ok
Nov 29 07:30:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 07:30:49 compute-0 ceph-mon[75237]: osdmap e51: 3 total, 3 up, 3 in
Nov 29 07:30:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 3.7 scrub starts
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 3.7 scrub ok
Nov 29 07:30:49 compute-0 ceph-mon[75237]: pgmap v137: 181 pgs: 77 unknown, 104 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 07:30:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 2.5 scrub starts
Nov 29 07:30:49 compute-0 ceph-mon[75237]: 2.5 scrub ok
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 07:30:49 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev e3fc97e6-4937-43c0-a2b7-f4f2fd714cc6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.7( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.8( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.d( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.5( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.9( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.b( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.4( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.6( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.1( v 43'39 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.3( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.a( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.2( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.e( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.f( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.c( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=26/27 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[8.0( v 39'4 (0'0,39'4] local-lis/les=38/39 n=4 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=14.639462471s) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 39'3 mlcod 39'3 active pruub 117.248619080s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.8( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=28/29 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[8.0( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52 pruub=14.639462471s) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 39'3 mlcod 0'0 unknown pruub 117.248619080s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.19( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.16( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1d( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1e( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.12( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.10( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.7( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.17( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.4( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.b( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.14( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.1( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.6( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.0( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 40'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.2( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.e( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.3( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.c( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 52 pg[6.5( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=26/26 les/c/f=27/27/0 sis=51) [0] r=0 lpr=51 pi=[26,51)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.0( empty local-lis/les=51/52 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.7( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.d( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=28/28 les/c/f=29/29/0 sis=51) [1] r=0 lpr=51 pi=[28,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 68b4c6c7-5e2f-4e87-8352-dbd9dc18a951 does not exist
Nov 29 07:30:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 966f221b-6e1c-461d-93c5-a68eb09ac474 does not exist
Nov 29 07:30:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 97a07acd-c603-44ff-b449-edf72de94bce does not exist
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:30:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:49 compute-0 sudo[104120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:49 compute-0 sudo[104120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:49 compute-0 sudo[104120]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:49 compute-0 sudo[104145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:49 compute-0 sudo[104145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:49 compute-0 sudo[104145]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:49 compute-0 sudo[104170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:49 compute-0 sudo[104170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:49 compute-0 sudo[104170]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:49 compute-0 sudo[104195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:30:49 compute-0 sudo[104195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 07:30:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 07:30:50 compute-0 podman[104257]: 2025-11-29 07:30:49.949044414 +0000 UTC m=+0.020712094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:50 compute-0 podman[104257]: 2025-11-29 07:30:50.305844843 +0000 UTC m=+0.377512533 container create 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:30:50 compute-0 systemd[1]: Started libpod-conmon-1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642.scope.
Nov 29 07:30:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 07:30:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v139: 212 pgs: 1 peering, 46 unknown, 165 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 07:30:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 07:30:51 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 07:30:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:51 compute-0 ceph-mon[75237]: osdmap e52: 3 total, 3 up, 3 in
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:30:51 compute-0 ceph-mon[75237]: 2.4 scrub starts
Nov 29 07:30:51 compute-0 ceph-mon[75237]: 2.4 scrub ok
Nov 29 07:30:51 compute-0 podman[104257]: 2025-11-29 07:30:51.632877918 +0000 UTC m=+1.704545598 container init 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:30:51 compute-0 podman[104257]: 2025-11-29 07:30:51.640897812 +0000 UTC m=+1.712565492 container start 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:30:51 compute-0 compassionate_hofstadter[104274]: 167 167
Nov 29 07:30:51 compute-0 systemd[1]: libpod-1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642.scope: Deactivated successfully.
Nov 29 07:30:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 07:30:52 compute-0 podman[104257]: 2025-11-29 07:30:52.110193307 +0000 UTC m=+2.181860987 container attach 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:52 compute-0 podman[104257]: 2025-11-29 07:30:52.111537783 +0000 UTC m=+2.183205533 container died 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 07:30:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 07:30:52 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev cbb8d0cf-e5f0-4921-8221-41b82a63d9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 07:30:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:30:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.10( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.15( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.16( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.14( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.19( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.11( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.12( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1c( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1d( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1e( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1f( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.18( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1b( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.4( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.5( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.7( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.6( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.9( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.13( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.b( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.8( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.f( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.e( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.d( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.c( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.3( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1( v 39'4 (0'0,39'4] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.17( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.a( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.2( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1a( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=38/39 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1abb1f48450f8817fbe424e43a1d4332652102e9211d3472de57244f671b99-merged.mount: Deactivated successfully.
Nov 29 07:30:52 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 07:30:52 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 07:30:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v141: 212 pgs: 1 peering, 31 unknown, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 29 op/s
Nov 29 07:30:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.10( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.16( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.19( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.12( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.11( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1e( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.18( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.4( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.5( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.7( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.9( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.6( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.8( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.e( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.14( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.3( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.17( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.a( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.2( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.1a( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.0( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 39'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:52 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.13( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 07:30:53 compute-0 ceph-mon[75237]: pgmap v139: 212 pgs: 1 peering, 46 unknown, 165 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 07:30:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 3.8 scrub starts
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 3.8 scrub ok
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 2.7 scrub starts
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 2.7 scrub ok
Nov 29 07:30:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:53 compute-0 ceph-mon[75237]: osdmap e53: 3 total, 3 up, 3 in
Nov 29 07:30:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 3.e scrub starts
Nov 29 07:30:53 compute-0 ceph-mon[75237]: 3.e scrub ok
Nov 29 07:30:53 compute-0 ceph-mon[75237]: pgmap v141: 212 pgs: 1 peering, 31 unknown, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 29 op/s
Nov 29 07:30:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 07:30:53 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 53 pg[8.15( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=38/38 les/c/f=39/39/0 sis=52) [1] r=0 lpr=52 pi=[38,52)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:30:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 07:30:54 compute-0 podman[104257]: 2025-11-29 07:30:54.028541815 +0000 UTC m=+4.100209495 container remove 1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:54 compute-0 systemd[1]: libpod-conmon-1a9b27c601d8062c7858b3331681560865acc2d7223c83c1dae7b0a0cf956642.scope: Deactivated successfully.
Nov 29 07:30:54 compute-0 podman[104300]: 2025-11-29 07:30:54.166341076 +0000 UTC m=+0.022099791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 29 07:30:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 29 07:30:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v142: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:30:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:54 compute-0 podman[104300]: 2025-11-29 07:30:54.861515884 +0000 UTC m=+0.717274589 container create 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event c58d0da2-498f-4ae4-9a1f-17a877f6e949 (Global Recovery Event) in 10 seconds
Nov 29 07:30:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:30:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 07:30:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 07:30:55 compute-0 systemd[1]: Started libpod-conmon-2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7.scope.
Nov 29 07:30:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:55 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Nov 29 07:30:55 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] update: starting ev df82da48-9103-4b0d-b8ba-c637f8f36d93 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev 1128f2e1-2d79-41cb-88de-aa31485467a3 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event 1128f2e1-2d79-41cb-88de-aa31485467a3 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 13 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev abbdccd0-44b6-4f21-80e7-f4bae70199cf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event abbdccd0-44b6-4f21-80e7-f4bae70199cf (PG autoscaler increasing pool 6 PGs from 1 to 16) in 11 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev 7b68149c-f1c3-474c-a7b9-a92520cb7f3d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event 7b68149c-f1c3-474c-a7b9-a92520cb7f3d (PG autoscaler increasing pool 7 PGs from 1 to 32) in 10 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev 0aeacb9f-5277-4b98-af3e-27da61504bf0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event 0aeacb9f-5277-4b98-af3e-27da61504bf0 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 7 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev e3fc97e6-4937-43c0-a2b7-f4f2fd714cc6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event e3fc97e6-4937-43c0-a2b7-f4f2fd714cc6 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 6 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev cbb8d0cf-e5f0-4921-8221-41b82a63d9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event cbb8d0cf-e5f0-4921-8221-41b82a63d9ae (PG autoscaler increasing pool 10 PGs from 1 to 32) in 4 seconds
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] complete: finished ev df82da48-9103-4b0d-b8ba-c637f8f36d93 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 07:30:55 compute-0 ceph-mgr[75527]: [progress INFO root] Completed event df82da48-9103-4b0d-b8ba-c637f8f36d93 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 07:30:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 07:30:56 compute-0 ceph-mon[75237]: 2.6 scrub starts
Nov 29 07:30:56 compute-0 ceph-mon[75237]: 2.6 scrub ok
Nov 29 07:30:56 compute-0 podman[104300]: 2025-11-29 07:30:56.377891596 +0000 UTC m=+2.233650281 container init 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:56 compute-0 podman[104300]: 2025-11-29 07:30:56.385410827 +0000 UTC m=+2.241169532 container start 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:30:56 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 07:30:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 54 pg[9.0( v 53'234 (0'0,53'234] local-lis/les=40/41 n=102 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=9.810370445s) [1] r=0 lpr=54 pi=[40,54)/1 luod=53'233 crt=53'234 lcod 53'232 mlcod 53'232 active pruub 119.637786865s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:30:56 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 07:30:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v144: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:30:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 54 pg[9.0( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=9.810370445s) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 53'232 mlcod 0'0 unknown pruub 119.637786865s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:30:56 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 29 07:30:57 compute-0 trusting_brattain[104316]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:30:57 compute-0 trusting_brattain[104316]: --> relative data size: 1.0
Nov 29 07:30:57 compute-0 trusting_brattain[104316]: --> All data devices are unavailable
Nov 29 07:30:57 compute-0 systemd[1]: libpod-2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7.scope: Deactivated successfully.
Nov 29 07:30:57 compute-0 systemd[1]: libpod-2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7.scope: Consumed 1.047s CPU time.
Nov 29 07:30:57 compute-0 podman[104300]: 2025-11-29 07:30:57.608586767 +0000 UTC m=+3.464345452 container attach 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:30:57 compute-0 podman[104300]: 2025-11-29 07:30:57.610436126 +0000 UTC m=+3.466194811 container died 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:30:58 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 07:30:58 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 07:30:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v145: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:30:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:30:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:30:58 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 07:30:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 07:30:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 29 07:30:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 07:30:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:30:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 07:31:00 compute-0 ceph-mgr[75527]: [progress INFO root] Writing back 17 completed events
Nov 29 07:31:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v146: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:00 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 07:31:01 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 07:31:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 07:31:01 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.267811775s, txc = 0x56222419ac00
Nov 29 07:31:01 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 07:31:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 29 07:31:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v148: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:02 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 07:31:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v149: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:04 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 07:31:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:31:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 07:31:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:05 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 07:31:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v150: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.564002991s
Nov 29 07:31:08 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.997917652s
Nov 29 07:31:08 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.997917652s
Nov 29 07:31:08 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.821424484s, txc = 0x558bf3e17800
Nov 29 07:31:08 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.776844978s
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 3.16 deep-scrub starts
Nov 29 07:31:08 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.988244057s, txc = 0x5571f3d66000
Nov 29 07:31:08 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.943866730s, txc = 0x5571f4d63500
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 3.16 deep-scrub ok
Nov 29 07:31:08 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.040841103s, txc = 0x5571f3dcf800
Nov 29 07:31:08 compute-0 ceph-mon[75237]: pgmap v142: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:08 compute-0 ceph-mon[75237]: osdmap e54: 3 total, 3 up, 3 in
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 2.1f deep-scrub starts
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 2.1f deep-scrub ok
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 2.9 scrub starts
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 2.9 scrub ok
Nov 29 07:31:08 compute-0 ceph-mon[75237]: pgmap v144: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:31:08 compute-0 ceph-mon[75237]: 2.f scrub starts
Nov 29 07:31:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 54 pg[10.0( v 43'16 (0'0,43'16] local-lis/les=42/43 n=8 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=8.137035370s) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 43'15 mlcod 43'15 active pruub 124.009613037s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:08 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 54 pg[10.0( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=8.137035370s) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 43'15 mlcod 0'0 unknown pruub 124.009613037s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 29 07:31:08 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 07:31:08 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 07:31:08 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v151: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:09 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 07:31:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v152: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 68 op/s
Nov 29 07:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-daa0ca08f779292e8671eaf1b8382374985d690df191800b6076c4ed28dbcde2-merged.mount: Deactivated successfully.
Nov 29 07:31:11 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 10.065108299s
Nov 29 07:31:11 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 10.065108299s
Nov 29 07:31:11 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.336373329s, txc = 0x5622241b6900
Nov 29 07:31:11 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.329458237s, txc = 0x56222419a900
Nov 29 07:31:11 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.282473564s, txc = 0x562224199b00
Nov 29 07:31:11 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 07:31:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v153: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Nov 29 07:31:13 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:31:13 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Nov 29 07:31:14 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.694545746s, txc = 0x5571f3b07200
Nov 29 07:31:14 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.670268059s, txc = 0x5571f4d7a000
Nov 29 07:31:14 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.728292465s, txc = 0x5571f2cd5b00
Nov 29 07:31:14 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 5.700784206s
Nov 29 07:31:14 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 5.700784206s
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.12( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.702077389s
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.10( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.702077389s
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.520814896s, txc = 0x558bf240d500
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.498922348s, txc = 0x558bf3e17b00
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.506230354s, txc = 0x558bf3d25500
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.11( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.490028381s, txc = 0x558bf3d5bb00
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1f( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1e( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121679306s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694183350s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1e( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121618271s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694183350s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1d( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1c( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1d( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121698380s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694198608s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1d( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121324539s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694198608s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.12( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121258736s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694168091s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.13( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121439934s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694473267s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.12( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121128082s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694168091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.13( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121356964s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694473267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1a( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.18( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.15( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.121000290s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694396973s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.15( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120967865s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694396973s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.7( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.6( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.9( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120802879s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694427490s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.9( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120723724s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694427490s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.4( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.8( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.7( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120532990s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694488525s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.7( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120495796s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694488525s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.f( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.c( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.9( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.e( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.3( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120213509s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694519043s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.3( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120180130s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694519043s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1( v 43'16 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120064735s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694534302s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.14( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119968414s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694534302s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.f( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119918823s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694534302s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.15( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.f( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119851112s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694534302s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1a( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120088577s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694900513s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.16( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.1a( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.120050430s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694900513s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.17( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.19( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119791985s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694732666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.19( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119756699s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694732666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.18( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119688988s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694702148s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.d( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.18( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119660378s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694702148s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.b( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.a( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.2( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119467735s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694747925s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.4( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119477272s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694778442s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.5( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119408607s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694747925s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.5( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.2( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119424820s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694747925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.4( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119429588s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694778442s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.5( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119379997s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694747925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.3( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.c( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119349480s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694900513s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1e( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.c( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119300842s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694900513s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.1b( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.11( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119256973s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694946289s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.19( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.11( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119208336s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694946289s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.16( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119191170s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.695037842s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.16( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.119167328s) [1] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.695037842s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.13( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[10.2( v 43'16 lc 0'0 (0'0,43'16] local-lis/les=42/43 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.14( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.118993759s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active pruub 135.694915771s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:14 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[5.14( empty local-lis/les=49/51 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55 pruub=14.118968010s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 135.694915771s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 07:31:14 compute-0 sshd-session[104356]: Received disconnect from 20.185.243.158 port 57954:11: Bye Bye [preauth]
Nov 29 07:31:14 compute-0 sshd-session[104356]: Disconnected from authenticating user root 20.185.243.158 port 57954 [preauth]
Nov 29 07:31:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v154: 274 pgs: 2 peering, 2 active+clean+scrubbing, 62 unknown, 208 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:15 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 07:31:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v155: 274 pgs: 2 peering, 2 active+clean+scrubbing, 62 unknown, 208 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:16 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 07:31:16 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.037002563s
Nov 29 07:31:16 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.037003040s
Nov 29 07:31:16 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 15.098211288s, txc = 0x56222419af00
Nov 29 07:31:16 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.174992561s, txc = 0x56222419b200
Nov 29 07:31:17 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Nov 29 07:31:17 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 07:31:17 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.069 seconds
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:17 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 29 07:31:18 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.004439354s, txc = 0x558bf240db00
Nov 29 07:31:18 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.001044273s, txc = 0x558bf3e1a000
Nov 29 07:31:18 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.997482300s, txc = 0x558bf3e17200
Nov 29 07:31:18 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.962224007s, txc = 0x558bf3d6c300
Nov 29 07:31:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v156: 274 pgs: 2 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 07:31:18 compute-0 sshd-session[104358]: Received disconnect from 103.236.140.19 port 56620:11: Bye Bye [preauth]
Nov 29 07:31:18 compute-0 sshd-session[104358]: Disconnected from authenticating user root 103.236.140.19 port 56620 [preauth]
Nov 29 07:31:19 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.603507996s, txc = 0x5571f4d2c600
Nov 29 07:31:19 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.603691101s, txc = 0x5571f4d2cc00
Nov 29 07:31:19 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.039975643s, txc = 0x5622262ba000
Nov 29 07:31:19 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.111646652s, txc = 0x5622241b6c00
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.15( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.4( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.1e( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.7( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.5( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.2( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.3( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[5.14( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.3( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.352400780s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.584472656s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.3( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.352316856s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.584472656s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.352173805s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.584442139s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.352120399s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.584442139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351624489s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.583999634s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.1( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351575851s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.584030151s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351516724s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.583999634s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.1( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351509094s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.584030151s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.7( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351285934s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.583892822s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.5( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351964951s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.584579468s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.347140312s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.579757690s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.7( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351269722s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.583892822s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.5( v 43'39 (0'0,43'39] local-lis/les=51/52 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.351938248s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.584579468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.347104073s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.579757690s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.350958824s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 155.583908081s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:19 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=10.350925446s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.583908081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:19 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 07:31:20 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 07:31:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 29 07:31:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v157: 274 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 31 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:20 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 29 07:31:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v158: 274 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 31 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:22 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 29 07:31:23 compute-0 podman[104300]: 2025-11-29 07:31:23.316135611 +0000 UTC m=+29.171894316 container remove 2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:31:23 compute-0 sudo[104195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 systemd[1]: libpod-conmon-2ec0b1d199ca5cc32097944a4f679423568e9ea5a6d92e39a3adad4a1ca9b3f7.scope: Deactivated successfully.
Nov 29 07:31:23 compute-0 sudo[104361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-0 sudo[104361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[104361]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 sudo[104386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:23 compute-0 sudo[104386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[104386]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 sudo[104411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-0 sudo[104411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[104411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 sudo[104436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:31:23 compute-0 sudo[104436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 5.405849934s
Nov 29 07:31:23 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 5.405850410s
Nov 29 07:31:23 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.716930389s, txc = 0x558bf457a300
Nov 29 07:31:23 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.714327812s, txc = 0x558bf3d25800
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.1d( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.19( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.18( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.11( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.13( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.12( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.16( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.9( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.9( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.7( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.5( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.3( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.1( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.d( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.1a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[5.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1a( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.049701691s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129852295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1b( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1a( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.049670219s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129852295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.2( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.049603462s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129837036s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.2( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.049576759s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129837036s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.b( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573815346s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654205322s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.1a( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.3( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573798180s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654205322s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573883057s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active+scrubbing+deep pruub 145.654312134s@ [ 7.5:  ]  TIME_FOR_DEEP mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573855400s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654312134s@ TIME_FOR_DEEP mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.16( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.2( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573690414s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654281616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573673248s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654281616s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573533058s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654251099s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573513031s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654251099s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573489189s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654251099s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573469162s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654251099s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.2( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573249817s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654205322s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048664093s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129653931s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.d( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573201180s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654205322s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048632622s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129653931s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573216438s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654327393s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.e( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.c( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.15( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.2( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.d( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.1( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.a( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573084831s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654220581s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048209190s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129348755s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.c( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573176384s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654327393s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.4( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.1b( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.11( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573050499s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654220581s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048192978s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129348755s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573086739s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 145.654418945s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.1c( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.e( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.047996521s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129364014s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=8.573040962s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 145.654418945s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.e( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.047976494s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129364014s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.12( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.11( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.f( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[8.15( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.18( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.f( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.3( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.c( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.e( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.f( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.b( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.6( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.9( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.6( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.9( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.18( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.1f( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.13( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.1d( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.1a( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.1c( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.14( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048092842s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129638672s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.048076630s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129638672s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.e( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.9( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=15.993239403s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active+scrubbing pruub 153.075057983s@ [ 7.4:  ]  mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=15.993035316s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 153.075057983s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.047597885s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129669189s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.047578812s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129669189s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.1b( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.534308434s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.616516113s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.534293175s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.616516113s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.a( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.9( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.046908379s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129272461s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.9( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.046886444s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129272461s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.8( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533562660s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.616149902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533539772s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.616149902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.6( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.1f( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533313751s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.616043091s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.6( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.046551704s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129333496s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.6( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.046530724s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129333496s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533290863s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.616043091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.7( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533060074s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.616043091s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.533040047s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.616043091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.4( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.4( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045923233s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129104614s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.4( v 39'4 (0'0,39'4] local-lis/les=52/53 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045901299s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129104614s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.5( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045739174s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129074097s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1b( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045719147s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129074097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1a( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.19( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.18( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045443535s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129074097s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1e( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.18( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045417786s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129074097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045165062s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128967285s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1f( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1f( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.045142174s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128967285s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.532061577s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.615997314s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[8.10( empty local-lis/les=0/0 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1c( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.1d( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044815063s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128921509s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.531738281s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.615997314s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.531734467s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.616012573s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044404984s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128845215s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1c( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044383049s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128845215s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.12( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044230461s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128829956s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.12( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044212341s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128829956s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.13( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.531291008s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.616012573s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.1d( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.044706345s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128921509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.10( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.18( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.11( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.043567657s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128921509s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.17( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.11( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.043540001s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128921509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.15( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.866903305s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.952606201s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.15( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.866878510s) [2] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.952606201s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.14( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529921532s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.615829468s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529895782s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.615829468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529738426s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.615768433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.14( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.043694496s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.129730225s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.14( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.043667793s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.129730225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.15( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529652596s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.615844727s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529564857s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.615768433s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529631615s) [2] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.615844727s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.12( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529479027s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active pruub 150.615814209s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=51/52 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=13.529462814s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.615814209s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[9.11( v 53'234 lc 0'0 (0'0,53'234] local-lis/les=40/41 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.10( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.042279243s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active pruub 146.128738403s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[8.10( v 39'4 (0'0,39'4] local-lis/les=52/53 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55 pruub=9.041513443s) [0] r=-1 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.128738403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:31:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:31:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 07:31:24 compute-0 podman[104501]: 2025-11-29 07:31:23.963750727 +0000 UTC m=+0.023632807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 62 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:25 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:31:25 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 6.690305233s
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.597099304s, txc = 0x5571f4c1e300
Nov 29 07:31:25 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 6.690305233s
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.597076416s, txc = 0x5571f3dcfb00
Nov 29 07:31:25 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.686347961s, txc = 0x5622241b6f00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596967697s, txc = 0x5571f3b07800
Nov 29 07:31:25 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.682879448s, txc = 0x5622262ac000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596774101s, txc = 0x5571f4c1e600
Nov 29 07:31:25 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.679280281s, txc = 0x56222419b500
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596754074s, txc = 0x5571f4dc8000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596584320s, txc = 0x5571f4c1e900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596400261s, txc = 0x5571f4dce000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596264839s, txc = 0x5571f4c1ec00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596213341s, txc = 0x5571f4dce300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596161842s, txc = 0x5571f3d66300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596097946s, txc = 0x5571f4dce600
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.596039772s, txc = 0x5571f4c1ef00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595990181s, txc = 0x5571f4dce900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595774651s, txc = 0x5571f4dcec00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595774651s, txc = 0x5571f4c1f200
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595528603s, txc = 0x5571f4dcef00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595509529s, txc = 0x5571f4d36000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595446587s, txc = 0x5571f4c1f500
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595340729s, txc = 0x5571f4dcf200
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595253944s, txc = 0x5571f3d66600
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595239639s, txc = 0x5571f4dc8300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595223427s, txc = 0x5571f4d36900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595172882s, txc = 0x5571f4dcf500
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595168114s, txc = 0x5571f4c1f800
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.595056534s, txc = 0x5571f4dcf800
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594945908s, txc = 0x5571f3d66900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594895363s, txc = 0x5571f4c1fb00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594837189s, txc = 0x5571f4dc8600
Nov 29 07:31:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594753265s, txc = 0x5571f4d36c00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594690323s, txc = 0x5571f4e22000
Nov 29 07:31:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594669342s, txc = 0x5571f3d66c00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594604492s, txc = 0x5571f4dcfb00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594533920s, txc = 0x5571f4dc8900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594542503s, txc = 0x5571f4d36f00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594411850s, txc = 0x5571f4e24000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594408035s, txc = 0x5571f4e22300
Nov 29 07:31:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594297409s, txc = 0x5571f4d37200
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594227791s, txc = 0x5571f4e24300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594239235s, txc = 0x5571f4dc8c00
Nov 29 07:31:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594228745s, txc = 0x5571f2c70300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594153404s, txc = 0x5571f4e22600
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594120026s, txc = 0x5571f4d37500
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594065666s, txc = 0x5571f4e2c000
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.594072342s, txc = 0x5571f4e24600
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593977928s, txc = 0x5571f4d37800
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593908310s, txc = 0x5571f4e22900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593740463s, txc = 0x5571f4d37b00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593729973s, txc = 0x5571f4e2c300
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593720436s, txc = 0x5571f4e22c00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593651772s, txc = 0x5571f4dc8f00
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.593645096s, txc = 0x5571f4e24900
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.230216980s, txc = 0x5571f4dc9500
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.691236496s
Nov 29 07:31:25 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.691236496s
Nov 29 07:31:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 07:31:25 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 07:31:25 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 29 07:31:25 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 29 07:31:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 79 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 62 unknown, 160 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:26 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 07:31:26 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 07:31:27 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.247981071s, txc = 0x558bf3d25b00
Nov 29 07:31:27 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.202432632s, txc = 0x558bf3d6c900
Nov 29 07:31:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 55 pg[6.1( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:27 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 55 pg[7.4( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:27 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 55 pg[7.5( empty local-lis/les=0/0 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:27 compute-0 podman[104501]: 2025-11-29 07:31:27.497454777 +0000 UTC m=+3.557336857 container create a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:31:27 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 07:31:28 compute-0 sudo[104538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oifhwhixfpcylgscynynoffehpslxgui ; /usr/bin/python3'
Nov 29 07:31:28 compute-0 sudo[104538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:28 compute-0 python3[104540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:31:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 129 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 31 unknown, 141 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Nov 29 07:31:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.1e scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.1e scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v145: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.1d scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.11 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.f scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.1d scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.19 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v146: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.a scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.11 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: osdmap e55: 3 total, 3 up, 3 in
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.19 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.18 deep-scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v148: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.1 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v149: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.2 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.18 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v150: 274 pgs: 2 active+clean+scrubbing, 1 peering, 62 unknown, 209 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 3.18 deep-scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.a scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.1 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.2 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: pgmap v151: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.3 scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.18 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 2.b deep-scrub starts
Nov 29 07:31:28 compute-0 ceph-mon[75237]: 7.3 scrub ok
Nov 29 07:31:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.808284760s, txc = 0x5571f4dc9800
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.808588982s, txc = 0x562224199800
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.808416367s, txc = 0x5622262acc00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.807786942s, txc = 0x5622262ad200
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.807522774s, txc = 0x5622262ad500
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.807247162s, txc = 0x5622262ac600
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.807023048s, txc = 0x5622241b7200
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.806709290s, txc = 0x5622262ba300
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.806163788s, txc = 0x5622262ba600
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805417061s, txc = 0x5622241b7500
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805150986s, txc = 0x5622262ba900
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805079460s, txc = 0x5622241b7800
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804832458s, txc = 0x56222419c300
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804810524s, txc = 0x5622262bac00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804736137s, txc = 0x5622262ad800
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804675102s, txc = 0x5622262baf00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804669380s, txc = 0x5622241b7b00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804644585s, txc = 0x5622262bb200
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804651260s, txc = 0x56222419cc00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804634094s, txc = 0x5622262adb00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804611206s, txc = 0x5622262e0000
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804578781s, txc = 0x5622262bb500
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804542542s, txc = 0x5622262e2000
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804531097s, txc = 0x56222419cf00
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.804345131s, txc = 0x56222419b800
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.952169418s, txc = 0x56222419d500
Nov 29 07:31:28 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.905386925s, txc = 0x5622262f0000
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.808138847s, txc = 0x5571f4e2e600
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.806211472s, txc = 0x5571f4e22f00
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.806178093s, txc = 0x5571f4dc9b00
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805650711s, txc = 0x5571f2c24000
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805508614s, txc = 0x5571f2c24300
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805444717s, txc = 0x5571f4e2c600
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805188179s, txc = 0x5571f4e24c00
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.805165291s, txc = 0x5571f4e2e900
Nov 29 07:31:28 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.754190445s, txc = 0x5571f2c24900
Nov 29 07:31:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 07:31:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 07:31:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Nov 29 07:31:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 29 07:31:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 07:31:29 compute-0 ceph-osd[89968]: osd.1 55 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:29 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:29.272+0000 7fe7bf403640 -1 osd.1 55 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:29 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 07:31:30 compute-0 ceph-osd[89968]: osd.1 55 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:30 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:31:30 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:30.322+0000 7fe7bf403640 -1 osd.1 55 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 129 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 31 unknown, 141 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:30 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.858432293s, txc = 0x558bf3d25200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.858086586s, txc = 0x558bf2a3cf00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.857462406s, txc = 0x558bf3d5b800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.857339382s, txc = 0x558bf240d800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.857182026s, txc = 0x558bf3e1a300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.856989861s, txc = 0x558bf2a3d200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.856743813s, txc = 0x558bf461e000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.856551647s, txc = 0x558bf461e300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.856403351s, txc = 0x558bf461e600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.856234550s, txc = 0x558bf461e900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.855931282s, txc = 0x558bf3d6cc00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.855694771s, txc = 0x558bf2a3cc00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.855441093s, txc = 0x558bf3e1a600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.855177402s, txc = 0x558bf3e1a900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854993820s, txc = 0x558bf3e1ac00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854793072s, txc = 0x558bf3d6cf00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854687691s, txc = 0x558bf4538000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854542255s, txc = 0x558bf4538300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854410172s, txc = 0x558bf4538600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854221344s, txc = 0x558bf3e1af00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854220390s, txc = 0x558bf461ec00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.854131699s, txc = 0x558bf3d6d200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853994846s, txc = 0x558bf3d6d500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853944778s, txc = 0x558bf3d6c600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853899479s, txc = 0x558bf3d6d800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853804111s, txc = 0x558bf461ef00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853784084s, txc = 0x558bf3d6db00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853692055s, txc = 0x558bf461f200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853678703s, txc = 0x558bf3e1b200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853685379s, txc = 0x558bf2169800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853540421s, txc = 0x558bf2169b00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853469849s, txc = 0x558bf461f500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853465557s, txc = 0x558bf216e000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853466511s, txc = 0x558bf3e1b500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853332520s, txc = 0x558bf4538900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853320122s, txc = 0x558bf216e300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853323460s, txc = 0x558bf3e1b800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853340149s, txc = 0x558bf461f800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853162289s, txc = 0x558bf4538c00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853152275s, txc = 0x558bf461fb00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.853011608s, txc = 0x558bf4538f00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852982044s, txc = 0x558bf3d1e000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852933407s, txc = 0x558bf3d1e300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852924347s, txc = 0x558bf4539200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852915764s, txc = 0x558bf3d20000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852828503s, txc = 0x558bf4539500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852558136s, txc = 0x558bf240cc00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852504253s, txc = 0x558bf3d1e600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852501392s, txc = 0x558bf3d1e900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852415085s, txc = 0x558bf3d1ec00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852198601s, txc = 0x558bf3d1ef00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852158546s, txc = 0x558bf3d20300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852090359s, txc = 0x558bf3d1f200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.852023602s, txc = 0x558bf3d20600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851966381s, txc = 0x558bf3d1f500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851829529s, txc = 0x558bf3d1f800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851744175s, txc = 0x558bf3d1fb00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851731300s, txc = 0x558bf3d20900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851616383s, txc = 0x558bf45c4000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851505280s, txc = 0x558bf3d20c00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851506233s, txc = 0x558bf45c4300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851332664s, txc = 0x558bf45c4600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851321697s, txc = 0x558bf3d20f00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851239204s, txc = 0x558bf45c4900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851161957s, txc = 0x558bf45c4c00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851066589s, txc = 0x558bf45c4f00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.851011753s, txc = 0x558bf3d21200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850908279s, txc = 0x558bf45c5200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850763798s, txc = 0x558bf3d21500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850660324s, txc = 0x558bf4539800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850571156s, txc = 0x558bf4539b00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850459099s, txc = 0x558bf45c5500
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850303173s, txc = 0x558bf45c5800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850152493s, txc = 0x558bf45c5b00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850042343s, txc = 0x558bf4548000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.850051880s, txc = 0x558bf45d0000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.849914074s, txc = 0x558bf3d21800
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.849890709s, txc = 0x558bf45d0300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.849407196s, txc = 0x558bf4548300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.849230766s, txc = 0x558bf3d21b00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848998070s, txc = 0x558bf45d0600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.849077225s, txc = 0x558bf454a000
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848906517s, txc = 0x558bf45d0900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848885059s, txc = 0x558bf4548600
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848751068s, txc = 0x558bf45d0c00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848697662s, txc = 0x558bf4548900
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848657131s, txc = 0x558bf45d0f00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848582268s, txc = 0x558bf454a300
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848564625s, txc = 0x558bf4548c00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.848537445s, txc = 0x558bf45d1200
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.847764015s, txc = 0x558bf4548f00
Nov 29 07:31:30 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.847474098s, txc = 0x558bf454a600
Nov 29 07:31:30 compute-0 systemd[1]: Started libpod-conmon-a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828.scope.
Nov 29 07:31:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:30 compute-0 podman[104541]: 2025-11-29 07:31:30.763718499 +0000 UTC m=+2.180678429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:31:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 29 07:31:31 compute-0 ceph-osd[89968]: osd.1 55 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:31:31 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:31.286+0000 7fe7bf403640 -1 osd.1 55 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:31 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.967040539s, txc = 0x5571f4e55200
Nov 29 07:31:31 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.968077660s, txc = 0x562226b13b00
Nov 29 07:31:31 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.959186554s, txc = 0x5622262f0300
Nov 29 07:31:31 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.955703259s, txc = 0x56222419bb00
Nov 29 07:31:31 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.007679462s, txc = 0x5622262f0600
Nov 29 07:31:31 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 07:31:31 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 29 07:31:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.091855049s, txc = 0x558bf216e600
Nov 29 07:31:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.085692406s, txc = 0x558bf454a900
Nov 29 07:31:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.086238861s, txc = 0x558bf45d1500
Nov 29 07:31:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.085869789s, txc = 0x558bf4549200
Nov 29 07:31:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.092073441s, txc = 0x558bf3e1bb00
Nov 29 07:31:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[11.0( empty local-lis/les=44/46 n=0 ec=44/44 lis/c=44/44 les/c/f=46/46/0 sis=56 pruub=13.339334488s) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active pruub 158.514450073s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:31:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 29 07:31:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[11.0( empty local-lis/les=44/46 n=0 ec=44/44 lis/c=44/44 les/c/f=46/46/0 sis=56 pruub=13.339334488s) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown pruub 158.514450073s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:31 compute-0 podman[104501]: 2025-11-29 07:31:31.955024292 +0000 UTC m=+8.014906362 container init a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:31:31 compute-0 podman[104501]: 2025-11-29 07:31:31.962667571 +0000 UTC m=+8.022549611 container start a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:31:31 compute-0 nostalgic_raman[104554]: 167 167
Nov 29 07:31:31 compute-0 systemd[1]: libpod-a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828.scope: Deactivated successfully.
Nov 29 07:31:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 29 07:31:32 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0[75233]: 2025-11-29T07:31:32.006+0000 7ff97fecb640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 07:31:32 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:32 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:32 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:32.239+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:32 compute-0 sshd-session[104556]: Received disconnect from 103.234.151.178 port 5982:11: Bye Bye [preauth]
Nov 29 07:31:32 compute-0 sshd-session[104556]: Disconnected from authenticating user root 103.234.151.178 port 5982 [preauth]
Nov 29 07:31:32 compute-0 podman[104501]: 2025-11-29 07:31:32.367069111 +0000 UTC m=+8.426951201 container attach a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:31:32 compute-0 podman[104501]: 2025-11-29 07:31:32.367877884 +0000 UTC m=+8.427759954 container died a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:31:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 07:31:33 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:33 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:33 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:33.279+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:33 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 07:31:34 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:34 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:34.233+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:34 compute-0 sshd-session[104576]: Received disconnect from 114.34.106.146 port 36734:11: Bye Bye [preauth]
Nov 29 07:31:34 compute-0 sshd-session[104576]: Disconnected from authenticating user root 114.34.106.146 port 36734 [preauth]
Nov 29 07:31:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:34 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 07:31:35 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:35 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:35.266+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:35 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 29 07:31:36 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:36 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 29 07:31:36 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:36.246+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:36 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 29 07:31:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Nov 29 07:31:36 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 29 07:31:37 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 07:31:37 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:37 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'default.rgw.log' : 5 ])
Nov 29 07:31:37 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:37.275+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:37 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:31:37 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 5.780857563s
Nov 29 07:31:37 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 5.780857563s
Nov 29 07:31:37 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.298101425s, txc = 0x5571f4ee6f00
Nov 29 07:31:37 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.291035175s, txc = 0x5571f4e55b00
Nov 29 07:31:37 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 07:31:37 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 29 07:31:38 compute-0 ceph-osd[89968]: osd.1 56 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:38 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'default.rgw.log' : 5 ])
Nov 29 07:31:38 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:38.265+0000 7fe7bf403640 -1 osd.1 56 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14259.0:281 9.4 9.9fd64b44 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:31:38
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Some PGs (0.101639) are unknown; try again later
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 2 active+clean+scrubbing, 32 activating, 101 peering, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:31:39 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.131261349s
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.131261826s
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438546658s, txc = 0x558bf3d0fb00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438408852s, txc = 0x558bf210e000
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438338280s, txc = 0x558bf210e300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438229561s, txc = 0x558bf3d3e000
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438169003s, txc = 0x558bf3d3e300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.438123226s, txc = 0x558bf210e600
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437758923s, txc = 0x558bf3d3e600
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437684536s, txc = 0x558bf4568300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437373638s, txc = 0x558bf3d3e900
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437348843s, txc = 0x558bf4576300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437304974s, txc = 0x558bf210e900
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437300682s, txc = 0x558bf3d3ec00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437257767s, txc = 0x558bf210ec00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437215805s, txc = 0x558bf210ef00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437146664s, txc = 0x558bf210f200
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437096596s, txc = 0x558bf210f500
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437084198s, txc = 0x558bf3d3ef00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.437044621s, txc = 0x558bf210f800
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436985970s, txc = 0x558bf4568600
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436967850s, txc = 0x558bf210fb00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436915874s, txc = 0x558bf3d41200
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436885357s, txc = 0x558bf3d41500
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436891556s, txc = 0x558bf4568900
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436854362s, txc = 0x558bf3d41800
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436790943s, txc = 0x558bf3d3f200
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436702251s, txc = 0x558bf4568c00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436693192s, txc = 0x558bf3d41b00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436644077s, txc = 0x558bf3f68300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436617374s, txc = 0x558bf4568f00
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436590195s, txc = 0x558bf3f68600
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436563492s, txc = 0x558bf4569200
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436539173s, txc = 0x558bf229e300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.436817646s, txc = 0x558bf4576600
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.435980797s, txc = 0x558bf4569500
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.434995174s, txc = 0x558bf3d34900
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.3( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.0( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 53'232 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.c( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.e( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.2( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.8( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.4( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.6( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1a( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.5( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1c( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.19( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1e( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.13( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.14( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.12( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 56 pg[9.a( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=53'234 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:39 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 07:31:39 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 6.372829914s
Nov 29 07:31:39 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 6.372829914s
Nov 29 07:31:39 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.373463154s, txc = 0x562226b44300
Nov 29 07:31:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 07:31:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v152: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.2 KiB/s wr, 68 op/s
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v153: 274 pgs: 2 peering, 4 active+clean+scrubbing, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v154: 274 pgs: 2 peering, 2 active+clean+scrubbing, 62 unknown, 208 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 2.11 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v155: 274 pgs: 2 peering, 2 active+clean+scrubbing, 62 unknown, 208 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.1 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 2.b deep-scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.1 scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 2.11 scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.6 deep-scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v156: 274 pgs: 2 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 62 unknown, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.6 deep-scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 7.4 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.8 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 7.5 deep-scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v157: 274 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 31 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.2 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v158: 274 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 31 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.4 deep-scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:31:39 compute-0 ceph-mon[75237]: osdmap e56: 3 total, 3 up, 3 in
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v160: 305 pgs: 33 peering, 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 62 unknown, 205 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.8 scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.4 deep-scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.2 scrub ok
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v161: 305 pgs: 79 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 62 unknown, 160 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 6.6 scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.a scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: 5.b scrub starts
Nov 29 07:31:39 compute-0 ceph-mon[75237]: pgmap v162: 305 pgs: 129 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 31 unknown, 141 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.590498924s, txc = 0x5571f4f04900
Nov 29 07:31:39 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.014536381s, txc = 0x5571f4f06300
Nov 29 07:31:39 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.746401787s, txc = 0x558bf22a8900
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.19( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.9( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.14( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.3( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.2( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.f( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.e( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.d( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.c( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.a( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.8( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.4( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.b( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.5( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.6( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.7( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.18( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1b( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1c( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1d( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1e( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1f( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.11( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.12( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.15( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.16( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.17( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.1a( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.10( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[11.13( empty local-lis/les=44/46 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:31:40 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 07:31:40 compute-0 ceph-osd[89968]: osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:40 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:40 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:40.363+0000 7fe7bf403640 -1 osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:40 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.909454346s, txc = 0x56222580ef00
Nov 29 07:31:40 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 07:31:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 5 active+clean+scrubbing, 61 activating, 72 peering, 31 unknown, 136 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s; 2/134 objects degraded (1.493%)
Nov 29 07:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-347d24e59691c359f726620a17c112053049262a5fa4da063127b43553d3f295-merged.mount: Deactivated successfully.
Nov 29 07:31:41 compute-0 ceph-osd[89968]: osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:41 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:41.353+0000 7fe7bf403640 -1 osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.1f( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.14( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.c( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.e( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.1a( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.f( v 39'4 lc 0'0 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.b( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.10( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.9( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.4( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.6( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.18( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 57 pg[8.1d( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:311 10.15 10.88aa5c95 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e55)
Nov 29 07:31:42 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.control' : 1 ])
Nov 29 07:31:42 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:31:42.018+0000 7f816d157640 -1 osd.2 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:311 10.15 10.88aa5c95 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e55)
Nov 29 07:31:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:42 compute-0 ceph-osd[89968]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:42 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-1[89964]: 2025-11-29T07:31:42.367+0000 7fe7bf403640 -1 osd.1 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:281 9.4 9:22d26bf9:::data_loggenerations_metadata:head [watch reconnect cookie 94822413493248 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.10( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.12( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.1c( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.11( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1f( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1d( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1c( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1a( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.18( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.15( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.6( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.4( v 39'4 (0'0,39'4] local-lis/les=55/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.4( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.8( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.d( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.7( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.f( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.2( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.9( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.c( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.1( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.e( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.c( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.2( v 39'4 (0'0,39'4] local-lis/les=55/57 n=1 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.0( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 43'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.16( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.14( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.15( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.1a( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.17( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.d( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.b( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.a( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.5( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.3( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.15( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.2( v 43'16 (0'0,43'16] local-lis/les=54/57 n=1 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.1c( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1b( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.e( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.19( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.11( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.13( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.1e( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=55/57 n=0 ec=51/28 lis/c=51/51 les/c/f=52/52/0 sis=55) [2] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[8.1b( v 39'4 (0'0,39'4] local-lis/les=55/57 n=0 ec=52/38 lis/c=52/52 les/c/f=53/53/0 sis=55) [2] r=0 lpr=55 pi=[52,55)/1 crt=39'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 57 pg[10.12( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [2] r=0 lpr=54 pi=[42,54)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 5 active+clean+scrubbing, 61 activating, 72 peering, 31 unknown, 136 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/134 objects degraded (1.493%)
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.18( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.12( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.16( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.11( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.7( v 43'39 lc 40'21 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.13( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.5( v 43'39 lc 40'11 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.1( v 43'39 (0'0,43'39] local-lis/les=55/57 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.b( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.1( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.3( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.1a( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.f( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.f( v 43'39 lc 40'1 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[6.d( v 43'39 lc 40'13 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 57 pg[5.9( empty local-lis/les=55/57 n=0 ec=49/24 lis/c=49/49 les/c/f=51/51/0 sis=55) [1] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:43 compute-0 ceph-osd[90977]: osd.2 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:311 10.15 10.88aa5c95 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e55)
Nov 29 07:31:43 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.control' : 1 ])
Nov 29 07:31:43 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:31:43.026+0000 7f816d157640 -1 osd.2 57 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14259.0:311 10.15 10.88aa5c95 (undecoded) ondisk+write+known_if_redirected+supports_pool_eio e55)
Nov 29 07:31:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 07:31:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: 5 slow ops, oldest one blocked for 38 sec, daemons [osd.1,osd.2] have slow ops. (SLOW_OPS)
Nov 29 07:31:43 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 29 07:31:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:44 compute-0 ceph-osd[90977]: osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:44 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:31:44.421+0000 7f816d157640 -1 osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:45 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 07:31:45 compute-0 ceph-osd[90977]: osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:45 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:45 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:31:45.382+0000 7f816d157640 -1 osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 29 07:31:46 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-osd-2[90973]: 2025-11-29T07:31:46.423+0000 7f816d157640 -1 osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:46 compute-0 ceph-osd[90977]: osd.2 57 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14259.0:282 10.1f 10:f95f44c2:::notify.0:head [watch reconnect cookie 94822397998592 gen 1] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e54)
Nov 29 07:31:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 11.811470032s
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 11.811470985s
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.020055771s, txc = 0x558bf48dd500
Nov 29 07:31:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 178 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.d deep-scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 6.8 scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.a scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 6.6 scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.d deep-scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 6.8 scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.b scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.e scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: pgmap v163: 305 pgs: 129 peering, 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 31 unknown, 141 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.10 scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 7.7 scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.e scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.10 scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 7.7 scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: pgmap v164: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:54 compute-0 ceph-mon[75237]: osdmap e57: 3 total, 3 up, 3 in
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.17 scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'default.rgw.log' : 3 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: pgmap v166: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 6.a scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'default.rgw.log' : 4 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 6.c scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: pgmap v167: 305 pgs: 32 activating, 101 peering, 2 active+clean+scrubbing, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.1b scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 7.b scrub starts
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'default.rgw.log' : 5 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.17 scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5.1b scrub ok
Nov 29 07:31:54 compute-0 ceph-mon[75237]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'default.rgw.log' : 5 ])
Nov 29 07:31:54 compute-0 ceph-mon[75237]: pgmap v168: 305 pgs: 2 active+clean+scrubbing, 32 activating, 101 peering, 31 unknown, 139 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 11.420985222s
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 11.420985222s
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299200058s, txc = 0x5571f4eec900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299232483s, txc = 0x5571f4f3d500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299208641s, txc = 0x5571f4eecc00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299218178s, txc = 0x5571f4f07500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299201965s, txc = 0x5571f4f3d800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299196243s, txc = 0x5571f4eecf00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299168587s, txc = 0x5571f4f3db00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299136162s, txc = 0x5571f4eed200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299067497s, txc = 0x5571f4eed500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299074173s, txc = 0x5571f3066000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.299018860s, txc = 0x5571f4eed800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298957825s, txc = 0x5571f3066300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298953056s, txc = 0x5571f4eedb00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298955917s, txc = 0x5571f4f07800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298919678s, txc = 0x5571f3066600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298880577s, txc = 0x5571f3068000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298865318s, txc = 0x5571f3066900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298871040s, txc = 0x5571f4f07b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298854828s, txc = 0x5571f3068300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298838615s, txc = 0x5571f3066c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298816681s, txc = 0x5571f3068600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298813820s, txc = 0x5571f4e98000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298820496s, txc = 0x5571f3066f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298797607s, txc = 0x5571f3068900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298768997s, txc = 0x5571f3068c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298768044s, txc = 0x5571f3067200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298758507s, txc = 0x5571f4e98300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298753738s, txc = 0x5571f3068f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298744202s, txc = 0x5571f3067500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298722267s, txc = 0x5571f3067800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298693657s, txc = 0x5571f3067b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298704147s, txc = 0x5571f4e98600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298695564s, txc = 0x5571f3069200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298669815s, txc = 0x5571f4e9a000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298636436s, txc = 0x5571f4e9a300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298672676s, txc = 0x5571f3069500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298650742s, txc = 0x5571f4e98900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298609734s, txc = 0x5571f3069800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298615456s, txc = 0x5571f4e9a600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298596382s, txc = 0x5571f4e98c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298591614s, txc = 0x5571f3069b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298548698s, txc = 0x5571f4e9a900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298550606s, txc = 0x5571f4f60000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298549652s, txc = 0x5571f4e98f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298531532s, txc = 0x5571f4e9ac00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298527718s, txc = 0x5571f4f60300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298505783s, txc = 0x5571f4e9af00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298494339s, txc = 0x5571f4f60600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298480034s, txc = 0x5571f4e9b200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298462868s, txc = 0x5571f4f60900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298460007s, txc = 0x5571f4e9b500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298501015s, txc = 0x5571f4e99200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298439026s, txc = 0x5571f4e9b800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298444748s, txc = 0x5571f4f60c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298419952s, txc = 0x5571f4e9bb00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298412323s, txc = 0x5571f4e99500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298397064s, txc = 0x5571f4f60f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298370361s, txc = 0x5571f4f6e000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298359871s, txc = 0x5571f4f61200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298338890s, txc = 0x5571f4f6e300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298344612s, txc = 0x5571f4e99800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298285484s, txc = 0x5571f4f6e600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298290253s, txc = 0x5571f4f61500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298252106s, txc = 0x5571f4f6e900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298220634s, txc = 0x5571f4f6ec00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298226357s, txc = 0x5571f4f61800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298226357s, txc = 0x5571f4e99b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298193932s, txc = 0x5571f4f6ef00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298185349s, txc = 0x5571f4f61b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298169136s, txc = 0x5571f4f6f200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298135757s, txc = 0x5571f4f78000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298123360s, txc = 0x5571f4f6f500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298134804s, txc = 0x5571f4f76000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298094749s, txc = 0x5571f4f6f800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298101425s, txc = 0x5571f4f78300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298067093s, txc = 0x5571f4f6fb00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298055649s, txc = 0x5571f4f76300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298048973s, txc = 0x5571f4f78600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298023224s, txc = 0x5571f4f7e000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.298014641s, txc = 0x5571f4f78900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297988892s, txc = 0x5571f4f76600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297975540s, txc = 0x5571f4f7e300
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297975540s, txc = 0x5571f4f78c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297935486s, txc = 0x5571f4f78f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297932625s, txc = 0x5571f4f7e600
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297901154s, txc = 0x5571f4f79200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297891617s, txc = 0x5571f4f7e900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297869682s, txc = 0x5571f4f79500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297931671s, txc = 0x5571f4f76900
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297827721s, txc = 0x5571f4f7ec00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297828674s, txc = 0x5571f4f79800
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297785759s, txc = 0x5571f4f7ef00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297789574s, txc = 0x5571f4f76c00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297772408s, txc = 0x5571f4f79b00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297745705s, txc = 0x5571f4f7f200
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297730446s, txc = 0x5571f4f8c000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297721863s, txc = 0x5571f4f76f00
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297705650s, txc = 0x5571f4f7f500
Nov 29 07:31:54 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.297624588s, txc = 0x5571f4f8e000
Nov 29 07:31:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 29 07:31:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 07:31:54 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218409538s, txc = 0x558bf1a06f00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218380928s, txc = 0x558bf48dd800
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218353271s, txc = 0x558bf48ddb00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218393326s, txc = 0x558bf48dec00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218323708s, txc = 0x558bf41da600
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218294144s, txc = 0x558bf41da900
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218212128s, txc = 0x558bf41da000
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218195915s, txc = 0x558bf41da300
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218167305s, txc = 0x558bf3d11b00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.218199730s, txc = 0x558bf48def00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217984200s, txc = 0x558bf210e600
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217985153s, txc = 0x558bf41db500
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217988968s, txc = 0x558bf1a07500
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217936516s, txc = 0x558bf210e900
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217892647s, txc = 0x558bf4567500
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217907906s, txc = 0x558bf41db800
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217919350s, txc = 0x558bf1a07800
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217805862s, txc = 0x558bf4567200
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217726707s, txc = 0x558bf41dbb00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217687607s, txc = 0x558bf1a07b00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217661858s, txc = 0x558bf4566f00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217620850s, txc = 0x558bf3d40f00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217602730s, txc = 0x558bf4566c00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217647552s, txc = 0x558bf3d0e900
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217464447s, txc = 0x558bf3d0f200
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217471123s, txc = 0x558bf3d40c00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217407227s, txc = 0x558bf4549b00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217366219s, txc = 0x558bf4567b00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217329979s, txc = 0x558bf3d40900
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217302322s, txc = 0x558bf48df200
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217285156s, txc = 0x558bf4566000
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217242241s, txc = 0x558bf4567800
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.217389107s, txc = 0x558bf3d0ef00
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.216536522s, txc = 0x558bf3d0e600
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.216321945s, txc = 0x558bf3d3a000
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.216219902s, txc = 0x558bf216f800
Nov 29 07:31:54 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.774934769s, txc = 0x558bf3e16600
Nov 29 07:31:54 compute-0 podman[104501]: 2025-11-29 07:31:54.939436783 +0000 UTC m=+30.999318833 container remove a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:31:54 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 11.537849426s
Nov 29 07:31:54 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 11.537850380s
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280478477s, txc = 0x562226b5ec00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280543327s, txc = 0x562226b70300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280526161s, txc = 0x5622262f1500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280494690s, txc = 0x562226b5ef00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280489922s, txc = 0x5622262f1800
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280477524s, txc = 0x562226b70600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281023026s, txc = 0x562226b5f200
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281077385s, txc = 0x5622262f1b00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281059265s, txc = 0x562226b70900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281059265s, txc = 0x562226b5f500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281054497s, txc = 0x56222460e000
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281075478s, txc = 0x562226b70c00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281059265s, txc = 0x562226b70f00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281083107s, txc = 0x562226b5f800
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281063080s, txc = 0x562226b71200
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.281070709s, txc = 0x56222460e300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280993462s, txc = 0x562226b57500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280989647s, txc = 0x562226b6e600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280949593s, txc = 0x562226b6e900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280930519s, txc = 0x562226b57800
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280921936s, txc = 0x562226b6ec00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280899048s, txc = 0x562226b57b00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280900002s, txc = 0x562226b6ef00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280876160s, txc = 0x562226b6f200
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280879021s, txc = 0x562224610000
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280856133s, txc = 0x562226b6f500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280857086s, txc = 0x562224610300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280832291s, txc = 0x562226b6f800
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280835152s, txc = 0x562224610600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280811310s, txc = 0x562226b6fb00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280814171s, txc = 0x562224610900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280776978s, txc = 0x562226b74000
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280776978s, txc = 0x562224610c00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280756950s, txc = 0x562226b74300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280713081s, txc = 0x562224610f00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280690193s, txc = 0x562226b74600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280688286s, txc = 0x562224611200
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280602455s, txc = 0x562226b71500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280555725s, txc = 0x562226b5fb00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280543327s, txc = 0x562226b71800
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280534744s, txc = 0x56222460e600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280519485s, txc = 0x562226b71b00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280515671s, txc = 0x562226b76000
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280507088s, txc = 0x56222460e900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280487061s, txc = 0x562226b7c000
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280488014s, txc = 0x562226b76300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280465126s, txc = 0x562226b7c300
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280448914s, txc = 0x562226b7c600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280450821s, txc = 0x562226b76600
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280448914s, txc = 0x56222460ec00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280438423s, txc = 0x562226b7c900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280421257s, txc = 0x562226b76900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280417442s, txc = 0x56222460ef00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280735970s, txc = 0x562226b74900
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280392647s, txc = 0x562224611500
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280344009s, txc = 0x562226b76c00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.280317307s, txc = 0x562226b7cc00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.274416924s, txc = 0x56222460f200
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.270919800s, txc = 0x562226b76f00
Nov 29 07:31:55 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.932309151s, txc = 0x562226b54600
Nov 29 07:31:55 compute-0 systemd[1]: libpod-conmon-a2223375bfac8e049d1e2b39e27153e34372da6202c89b2947b2619090d32828.scope: Deactivated successfully.
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.585041046s, txc = 0x5571f4f8c300
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.585052490s, txc = 0x5571f4f8e300
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.707307816s, txc = 0x5571f4f8c600
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.707091331s, txc = 0x5571f4f8c900
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.706509590s, txc = 0x5571f4f77200
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.706349373s, txc = 0x5571f4f77500
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.706252098s, txc = 0x5571f4f77800
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.706026077s, txc = 0x5571f4f77b00
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.705859184s, txc = 0x5571f4f8cc00
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.706569672s, txc = 0x5571f4f8d200
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.743280411s, txc = 0x5571f4f7fb00
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.701592445s, txc = 0x5571f4f8d800
Nov 29 07:31:55 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.512228012s, txc = 0x5571f4faef00
Nov 29 07:31:55 compute-0 podman[104541]: 2025-11-29 07:31:55.434202834 +0000 UTC m=+26.851162674 container create 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:31:55 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:56 compute-0 systemd[1]: Started libpod-conmon-76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c.scope.
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=56/58 n=0 ec=44/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=44/44 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[44,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622052f0b2f8ab23c3a6bdeb30814848e0636119ce3e1ec9ef25b0f438485e5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622052f0b2f8ab23c3a6bdeb30814848e0636119ce3e1ec9ef25b0f438485e5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 3 active+clean+scrubbing, 32 peering, 5 active+recovery_wait+degraded, 2 active+recovering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 7.b scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 6.a scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 6.c scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 7.d scrub starts
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 6.e scrub starts
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v169: 305 pgs: 5 active+clean+scrubbing, 61 activating, 72 peering, 31 unknown, 136 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 102 B/s wr, 0 op/s; 2/134 objects degraded (1.493%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 7.d scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 6.e scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.control' : 1 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v170: 305 pgs: 5 active+clean+scrubbing, 61 activating, 72 peering, 31 unknown, 136 active+clean; 457 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/134 objects degraded (1.493%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'default.rgw.control' : 1 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: osdmap e58: 3 total, 3 up, 3 in
Nov 29 07:31:57 compute-0 ceph-mon[75237]: Health check failed: 5 slow ops, oldest one blocked for 38 sec, daemons [osd.1,osd.2] have slow ops. (SLOW_OPS)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 5.1c scrub starts
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v172: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 5.1f scrub starts
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 10.1 scrub starts
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 8 slow requests (by type [ 'started' : 8 ] most affected pool [ 'default.rgw.control' : 8 ])
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v173: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v174: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v175: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v176: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: pgmap v177: 305 pgs: 32 peering, 5 active+recovery_wait+degraded, 1 active+clean+scrubbing, 2 active+recovering, 265 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 178 B/s rd, 0 op/s; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 5.1c scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 5.1f scrub ok
Nov 29 07:31:57 compute-0 ceph-mon[75237]: 10.1 scrub ok
Nov 29 07:31:57 compute-0 podman[104541]: 2025-11-29 07:31:57.12264672 +0000 UTC m=+28.539606660 container init 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:31:57 compute-0 podman[104541]: 2025-11-29 07:31:57.136433827 +0000 UTC m=+28.553393697 container start 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 10/133 objects degraded (7.519%), 5 pgs degraded (PG_DEGRADED)
Nov 29 07:31:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops (SLOW_OPS)
Nov 29 07:31:57 compute-0 podman[104541]: 2025-11-29 07:31:57.745511021 +0000 UTC m=+29.162470861 container attach 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:31:57 compute-0 podman[104586]: 2025-11-29 07:31:57.827609614 +0000 UTC m=+2.665540658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 3 active+clean+scrubbing, 32 peering, 5 active+recovery_wait+degraded, 2 active+recovering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 07:31:59 compute-0 podman[104586]: 2025-11-29 07:31:59.012537512 +0000 UTC m=+3.850468516 container create 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:31:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 07:31:59 compute-0 ceph-mon[75237]: pgmap v178: 305 pgs: 3 active+clean+scrubbing, 32 peering, 5 active+recovery_wait+degraded, 2 active+recovering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:31:59 compute-0 systemd[1]: Started libpod-conmon-304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6.scope.
Nov 29 07:31:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3ae78bdf9f6a549f0f119dcd45fc91b85c6fa27e2e9cee7018e62e25f1e4f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3ae78bdf9f6a549f0f119dcd45fc91b85c6fa27e2e9cee7018e62e25f1e4f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3ae78bdf9f6a549f0f119dcd45fc91b85c6fa27e2e9cee7018e62e25f1e4f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3ae78bdf9f6a549f0f119dcd45fc91b85c6fa27e2e9cee7018e62e25f1e4f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 301 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 1 op/s; 7/133 objects degraded (5.263%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:00 compute-0 podman[104586]: 2025-11-29 07:32:00.74013938 +0000 UTC m=+5.578070484 container init 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:32:00 compute-0 podman[104586]: 2025-11-29 07:32:00.746474693 +0000 UTC m=+5.584405697 container start 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:32:00 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 29 07:32:01 compute-0 condescending_shtern[104670]: {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     "0": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "devices": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "/dev/loop3"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             ],
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_name": "ceph_lv0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_size": "21470642176",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "name": "ceph_lv0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "tags": {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.crush_device_class": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.encrypted": "0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_id": "0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.vdo": "0"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             },
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "vg_name": "ceph_vg0"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         }
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     ],
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     "1": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "devices": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "/dev/loop4"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             ],
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_name": "ceph_lv1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_size": "21470642176",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "name": "ceph_lv1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "tags": {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.crush_device_class": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.encrypted": "0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_id": "1",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.vdo": "0"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             },
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "vg_name": "ceph_vg1"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         }
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     ],
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     "2": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "devices": [
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "/dev/loop5"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             ],
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_name": "ceph_lv2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_size": "21470642176",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "name": "ceph_lv2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "tags": {
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.crush_device_class": "",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.encrypted": "0",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osd_id": "2",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:                 "ceph.vdo": "0"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             },
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "type": "block",
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:             "vg_name": "ceph_vg2"
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:         }
Nov 29 07:32:01 compute-0 condescending_shtern[104670]:     ]
Nov 29 07:32:01 compute-0 condescending_shtern[104670]: }
Nov 29 07:32:01 compute-0 systemd[1]: libpod-304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6.scope: Deactivated successfully.
Nov 29 07:32:01 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 07:32:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 29 07:32:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 301 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 1 op/s; 7/133 objects degraded (5.263%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:03 compute-0 podman[104586]: 2025-11-29 07:32:03.25938083 +0000 UTC m=+8.097311844 container attach 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:32:03 compute-0 podman[104586]: 2025-11-29 07:32:03.260449268 +0000 UTC m=+8.098380272 container died 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:32:03 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 07:32:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:04 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.022624969s, txc = 0x5571f4f4e300
Nov 29 07:32:04 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.006494045s, txc = 0x5571f4f8f500
Nov 29 07:32:04 compute-0 ceph-mon[75237]: Health check failed: Degraded data redundancy: 10/133 objects degraded (7.519%), 5 pgs degraded (PG_DEGRADED)
Nov 29 07:32:04 compute-0 ceph-mon[75237]: Health check update: 0 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops (SLOW_OPS)
Nov 29 07:32:04 compute-0 ceph-mon[75237]: pgmap v179: 305 pgs: 3 active+clean+scrubbing, 32 peering, 5 active+recovery_wait+degraded, 2 active+recovering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 10/133 objects degraded (7.519%); 1/133 objects misplaced (0.752%)
Nov 29 07:32:04 compute-0 ceph-mon[75237]: 10.2 scrub starts
Nov 29 07:32:04 compute-0 ceph-mon[75237]: 10.2 scrub ok
Nov 29 07:32:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 07:32:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 07:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3ae78bdf9f6a549f0f119dcd45fc91b85c6fa27e2e9cee7018e62e25f1e4f2-merged.mount: Deactivated successfully.
Nov 29 07:32:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 5/134 objects degraded (3.731%), 2 pgs degraded (PG_DEGRADED)
Nov 29 07:32:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 0 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops)
Nov 29 07:32:05 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 29 07:32:06 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 29 07:32:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:07 compute-0 sshd-session[104691]: Received disconnect from 45.78.219.195 port 33876:11: Bye Bye [preauth]
Nov 29 07:32:07 compute-0 sshd-session[104691]: Disconnected from authenticating user root 45.78.219.195 port 33876 [preauth]
Nov 29 07:32:07 compute-0 ceph-mon[75237]: pgmap v180: 305 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 301 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 1 op/s; 7/133 objects degraded (5.263%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:07 compute-0 ceph-mon[75237]: 10.3 scrub starts
Nov 29 07:32:07 compute-0 ceph-mon[75237]: 5.15 scrub starts
Nov 29 07:32:07 compute-0 ceph-mon[75237]: 10.3 scrub ok
Nov 29 07:32:07 compute-0 ceph-mon[75237]: pgmap v181: 305 pgs: 3 active+recovery_wait+degraded, 1 active+recovering, 301 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 1 op/s; 7/133 objects degraded (5.263%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:07 compute-0 ceph-mon[75237]: 5.15 scrub ok
Nov 29 07:32:07 compute-0 ceph-mon[75237]: 5.4 scrub starts
Nov 29 07:32:07 compute-0 ceph-mon[75237]: Health check update: Degraded data redundancy: 5/134 objects degraded (3.731%), 2 pgs degraded (PG_DEGRADED)
Nov 29 07:32:07 compute-0 ceph-mon[75237]: Health check cleared: SLOW_OPS (was: 0 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops)
Nov 29 07:32:07 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 29 07:32:08 compute-0 podman[104586]: 2025-11-29 07:32:08.389735525 +0000 UTC m=+13.227666539 container remove 304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:32:08 compute-0 systemd[1]: libpod-conmon-304dd7e5e804412f70d33236bc6438964a12906f53072f4cca2d20a0a6df9ce6.scope: Deactivated successfully.
Nov 29 07:32:08 compute-0 sudo[104436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 29 07:32:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:08 compute-0 sudo[104698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:08 compute-0 sudo[104698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:08 compute-0 sudo[104698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-0 sudo[104723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:08 compute-0 sudo[104723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:08 compute-0 sudo[104723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:08 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 07:32:08 compute-0 sudo[104748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:08 compute-0 sudo[104748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:08 compute-0 sudo[104748]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:08 compute-0 sudo[104773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:32:08 compute-0 sudo[104773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 07:32:09 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.081855436 +0000 UTC m=+0.040471749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:09 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 07:32:09 compute-0 ceph-mon[75237]: pgmap v182: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:09 compute-0 ceph-mon[75237]: 5.4 scrub ok
Nov 29 07:32:09 compute-0 ceph-mon[75237]: 5.7 scrub starts
Nov 29 07:32:09 compute-0 ceph-mon[75237]: 5.7 scrub ok
Nov 29 07:32:09 compute-0 ceph-mon[75237]: pgmap v183: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.380441965 +0000 UTC m=+0.339058248 container create b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:32:09 compute-0 sshd-session[104695]: Received disconnect from 101.47.142.104 port 45126:11: Bye Bye [preauth]
Nov 29 07:32:09 compute-0 sshd-session[104695]: Disconnected from authenticating user root 101.47.142.104 port 45126 [preauth]
Nov 29 07:32:09 compute-0 systemd[1]: Started libpod-conmon-b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf.scope.
Nov 29 07:32:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:09 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.689408339 +0000 UTC m=+0.648024642 container init b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.696479109 +0000 UTC m=+0.655095392 container start b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:32:09 compute-0 unruffled_noether[104855]: 167 167
Nov 29 07:32:09 compute-0 systemd[1]: libpod-b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf.scope: Deactivated successfully.
Nov 29 07:32:09 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.818337656 +0000 UTC m=+0.776953939 container attach b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:32:09 compute-0 podman[104838]: 2025-11-29 07:32:09.818881821 +0000 UTC m=+0.777498124 container died b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-14af64592385f1ccb795080bb60458ad1ae0d80b07cf6309dd970b8001aa905c-merged.mount: Deactivated successfully.
Nov 29 07:32:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s; 2/134 objects degraded (1.493%); 19 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 10.4 scrub starts
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 10.4 scrub ok
Nov 29 07:32:10 compute-0 ceph-mon[75237]: pgmap v184: 305 pgs: 2 active+recovery_wait+degraded, 1 active+recovering, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s; 5/134 objects degraded (3.731%); 1/134 objects misplaced (0.746%); 15 B/s, 0 objects/s recovering
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 5.1e scrub starts
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 10.5 scrub starts
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 10.5 scrub ok
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 5.1e scrub ok
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 5.2 scrub starts
Nov 29 07:32:10 compute-0 ceph-mon[75237]: 5.2 scrub ok
Nov 29 07:32:10 compute-0 podman[104838]: 2025-11-29 07:32:10.94490372 +0000 UTC m=+1.903520043 container remove b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:32:10 compute-0 systemd[1]: libpod-conmon-b3c7442caa667c76d1931f7d269fc620dedf69e9be3144239a126d4b834c9acf.scope: Deactivated successfully.
Nov 29 07:32:11 compute-0 practical_chatterjee[104600]: could not fetch user info: no user info saved
Nov 29 07:32:11 compute-0 podman[104899]: 2025-11-29 07:32:11.088206555 +0000 UTC m=+0.027701847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:11 compute-0 podman[104899]: 2025-11-29 07:32:11.188624533 +0000 UTC m=+0.128119805 container create 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:32:11 compute-0 systemd[1]: Started libpod-conmon-0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74.scope.
Nov 29 07:32:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe84a7b03243437cd4f2cd92d7eaa9cd4492cbd04d6f6cb25d82f50059ce1122/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe84a7b03243437cd4f2cd92d7eaa9cd4492cbd04d6f6cb25d82f50059ce1122/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe84a7b03243437cd4f2cd92d7eaa9cd4492cbd04d6f6cb25d82f50059ce1122/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe84a7b03243437cd4f2cd92d7eaa9cd4492cbd04d6f6cb25d82f50059ce1122/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:11 compute-0 podman[104899]: 2025-11-29 07:32:11.406752931 +0000 UTC m=+0.346248223 container init 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:32:11 compute-0 podman[104899]: 2025-11-29 07:32:11.414026547 +0000 UTC m=+0.353521819 container start 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:32:11 compute-0 podman[104899]: 2025-11-29 07:32:11.481882601 +0000 UTC m=+0.421377903 container attach 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:32:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 2/134 objects degraded (1.493%), 1 pg degraded (PG_DEGRADED)
Nov 29 07:32:11 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 07:32:12 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 07:32:12 compute-0 ceph-mon[75237]: pgmap v185: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s; 2/134 objects degraded (1.493%); 19 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:12 compute-0 fervent_beaver[104915]: {
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_id": 2,
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "type": "bluestore"
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     },
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_id": 0,
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "type": "bluestore"
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     },
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_id": 1,
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:         "type": "bluestore"
Nov 29 07:32:12 compute-0 fervent_beaver[104915]:     }
Nov 29 07:32:12 compute-0 fervent_beaver[104915]: }
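[Annotation] The JSON block above, emitted by the fervent_beaver container, appears to be ceph-volume's JSON device listing: a map keyed by OSD UUID, each entry carrying the cluster fsid, the backing LV, the OSD id, and the objectstore type. A sketch that turns a captured copy of it into an osd_id-to-device map; the file name osds.json is a hypothetical capture, not something the tooling writes.

    #!/usr/bin/env python3
    """Turn the JSON OSD listing above into an osd_id -> device map."""
    import json

    def osd_devices(path="osds.json"):  # hypothetical capture of the output
        with open(path, encoding="utf-8") as fh:
            listing = json.load(fh)
        # Keyed by OSD UUID; keep only bluestore entries, as seen above.
        return {
            entry["osd_id"]: entry["device"]
            for entry in listing.values()
            if entry.get("type") == "bluestore"
        }

    if __name__ == "__main__":
        for osd_id, device in sorted(osd_devices().items()):
            print(f"osd.{osd_id} -> {device}")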
Nov 29 07:32:12 compute-0 systemd[1]: libpod-0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74.scope: Deactivated successfully.
Nov 29 07:32:12 compute-0 podman[104948]: 2025-11-29 07:32:12.464488922 +0000 UTC m=+0.034955743 container died 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:32:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 1 op/s; 2/134 objects degraded (1.493%); 4 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:12 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 07:32:12 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 07:32:12 compute-0 systemd[1]: libpod-76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c.scope: Deactivated successfully.
Nov 29 07:32:13 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Nov 29 07:32:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 4 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 07:32:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 07:32:15 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 29 07:32:16 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Nov 29 07:32:16 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 07:32:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:18 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.810846806s, txc = 0x5571f4f4f800
Nov 29 07:32:18 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 5.641175747s
Nov 29 07:32:18 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 5.641176224s
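[Annotation] BlueStore prints "slow operation observed" when an internal operation (here kv_commit, kv_sync, and the _txc_committed_kv callback on osd.2) exceeds its age threshold, typically a few seconds; 5-6 s commits on a small virtualized cluster usually point at contended backing storage. A sketch that extracts these latencies from an exported log; the file name is a placeholder and the regex is fitted to the log_latency / log_latency_fn lines above.

    #!/usr/bin/env python3
    """Pull BlueStore 'slow operation observed' latencies out of a log."""
    import re

    SLOW = re.compile(
        r"bluestore\((?P<osd>[^)]+)\) log_latency(?:_fn)? slow operation "
        r"observed for (?P<op>\w+), latency = (?P<lat>[0-9.]+)s"
    )

    def slow_ops(path="messages.txt", threshold=5.0):  # hypothetical export
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SLOW.search(line)
                if m and float(m.group("lat")) >= threshold:
                    yield m.group("osd"), m.group("op"), float(m.group("lat"))

    if __name__ == "__main__":
        for osd, op, lat in slow_ops():
            print(f"{osd}: {op} took {lat:.3f}s")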
Nov 29 07:32:18 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 29 07:32:18 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 29 07:32:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:18 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 07:32:18 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/134 objects degraded (1.493%), 1 pg degraded)
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
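[Annotation] The monitor has just cleared PG_DEGRADED and declared the cluster healthy. For automation that needs to block on this transition, a small wait loop around the real `ceph health` CLI call works; the timeout and poll interval below are arbitrary choices for the sketch, not defaults.

    #!/usr/bin/env python3
    """Wait until the cluster reports HEALTH_OK, as it does above."""
    import subprocess
    import time

    def wait_healthy(timeout=300.0, interval=5.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status = subprocess.run(
                ["ceph", "health"], capture_output=True, text=True,
            ).stdout.strip()
            if status.startswith("HEALTH_OK"):
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        print("healthy" if wait_healthy() else "still degraded")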
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:32:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 07:32:19 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 07:32:19 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.508395195s, txc = 0x5571f4f4fb00
Nov 29 07:32:19 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.655934334s, txc = 0x5571f4f94900
Nov 29 07:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe84a7b03243437cd4f2cd92d7eaa9cd4492cbd04d6f6cb25d82f50059ce1122-merged.mount: Deactivated successfully.
Nov 29 07:32:19 compute-0 ceph-mon[75237]: Health check update: Degraded data redundancy: 2/134 objects degraded (1.493%), 1 pg degraded (PG_DEGRADED)
Nov 29 07:32:19 compute-0 ceph-mon[75237]: 10.6 scrub starts
Nov 29 07:32:19 compute-0 ceph-mon[75237]: 10.6 scrub ok
Nov 29 07:32:19 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 07:32:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 07:32:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 07:32:20 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 07:32:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 07:32:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 65 B/s, 0 objects/s recovering
Nov 29 07:32:20 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 07:32:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 07:32:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 07:32:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.12( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324495316s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476806641s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.11( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324679375s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.477005005s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.12( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324445724s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476806641s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.11( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324623108s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.477005005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.10( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324416161s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476806641s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.10( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.324347496s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476806641s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.7( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323998451s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476974487s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.4( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323840141s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476928711s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.7( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323961258s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476974487s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.4( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323815346s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476928711s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.6( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323770523s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476913452s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.8( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=12.871457100s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.024597168s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.6( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323703766s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476913452s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.f( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323682785s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476989746s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.8( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=12.871340752s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.024597168s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.9( v 58'20 (0'0,58'20] local-lis/les=54/57 n=1 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323651314s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 58'19 active pruub 198.476989746s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.f( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323651314s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476989746s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.9( v 58'20 (0'0,58'20] local-lis/les=54/57 n=1 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323616028s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 0'0 unknown NOTIFY pruub 198.476989746s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.164611816s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.318130493s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.e( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323477745s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 58'19 active pruub 198.477066040s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.15( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163767815s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 58'19 active pruub 202.317398071s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.e( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.323449135s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 0'0 unknown NOTIFY pruub 198.477066040s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.15( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163734436s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 0'0 unknown NOTIFY pruub 202.317398071s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.14( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163702011s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 58'19 active pruub 202.317352295s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.164595604s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.318130493s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.16( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163577080s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317367554s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.14( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163591385s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 0'0 unknown NOTIFY pruub 202.317352295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.16( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163551331s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317367554s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.17( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163587570s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317443848s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.17( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163565636s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317443848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.b( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163576126s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317489624s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.b( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163560867s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317489624s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.d( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163465500s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 58'19 active pruub 202.317459106s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.d( v 58'20 (0'0,58'20] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163439751s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'20 lcod 58'19 mlcod 0'0 unknown NOTIFY pruub 202.317459106s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.19( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163854599s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317962646s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1e( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163630486s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317764282s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.19( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163835526s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317962646s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.2( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163476944s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317626953s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1e( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163605690s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317764282s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.2( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163445473s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317626953s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.13( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163423538s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 202.317718506s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.13( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=13.163405418s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.317718506s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1a( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.322465897s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active pruub 198.476882935s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:21 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[10.1a( v 43'16 (0'0,43'16] local-lis/les=54/57 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59 pruub=9.322438240s) [1] r=-1 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.476882935s@ mbc={}] state<Start>: transitioning to Stray
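[Annotation] The burst above is osd.2 reacting to osdmap e59: for each affected PG in pool 10 it logs start_peering_interval with the up/acting set moving from [2] to [0] or [1], then transitions its now non-primary copy to Stray. A sketch that tallies these primary handoffs from an exported log; the regex is fitted to the single-OSD acting sets seen here and the input file name is hypothetical.

    #!/usr/bin/env python3
    """Tally the primary handoffs in the peering burst above."""
    import re
    from collections import Counter

    PEER = re.compile(
        r"pg\[(?P<pgid>[0-9a-f.]+)\(.*?start_peering_interval "
        r"up \[(?P<old>\d+)\] -> \[(?P<new>\d+)\]"
    )

    def handoffs(path="messages.txt"):  # hypothetical journal export
        moves = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PEER.search(line)
                if m:
                    moves[(m.group("old"), m.group("new"))] += 1
        return moves

    if __name__ == "__main__":
        for (old, new), count in sorted(handoffs().items()):
            print(f"osd.{old} -> osd.{new}: {count} pgs")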
Nov 29 07:32:21 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 07:32:21 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 07:32:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 65 B/s, 0 objects/s recovering
Nov 29 07:32:22 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Nov 29 07:32:23 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Nov 29 07:32:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 07:32:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:24 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 29 07:32:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 07:32:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:32:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.1( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.4( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.9( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.7( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.17( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.d( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[10.16( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:24 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.e( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.551202774s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 219.585479736s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.e( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.551172256s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.585479736s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.2( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.888425827s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 223.922851562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.2( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.888391495s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.922851562s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.6( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.550443649s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 219.585449219s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.6( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.550414085s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.585449219s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=10.704800606s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 221.740036011s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:24 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=10.704773903s) [1] r=-1 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.740036011s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:24 compute-0 podman[104948]: 2025-11-29 07:32:24.973257615 +0000 UTC m=+12.543724366 container remove 0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:32:24 compute-0 podman[104541]: 2025-11-29 07:32:24.974045847 +0000 UTC m=+56.391005687 container died 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:32:24 compute-0 systemd[1]: libpod-conmon-0e6211aa8245eb62e5c2eed129d2c5549d10b3a239bc8482e149a679f90a9e74.scope: Deactivated successfully.
Nov 29 07:32:25 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.12( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.11( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.10( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.6( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.725183487s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844116211s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.725151062s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844116211s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.495466232s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.614456177s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=9.692500114s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 208.811553955s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=9.692470551s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.811553955s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.495423317s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.614456177s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.495354652s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'236 lcod 58'235 mlcod 58'235 active pruub 208.614562988s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.495327950s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'236 lcod 58'235 mlcod 0'0 unknown NOTIFY pruub 208.614562988s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494891167s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.614410400s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494848251s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.614410400s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724545479s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844192505s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.9( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724395752s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724184990s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844070435s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724130630s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844070435s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494487762s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.614486694s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724133492s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844192505s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494409561s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.614486694s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.2( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.724099159s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.d( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494449615s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 58'237 active pruub 208.614791870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723711967s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844116211s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.d( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.494391441s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 0'0 unknown NOTIFY pruub 208.614791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723698616s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844161987s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723665237s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844116211s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.493962288s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 58'237 active pruub 208.614608765s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723542213s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844161987s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.493938446s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 0'0 unknown NOTIFY pruub 208.614608765s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723515511s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844207764s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.723485947s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844207764s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.493405342s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.614791870s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722650528s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844223022s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.d( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722602844s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844223022s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722852707s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844406128s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722703934s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844528198s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722661972s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844528198s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.493367195s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.614791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.722394943s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844406128s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721903801s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844406128s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721866608s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844406128s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.492624283s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.615188599s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.492583275s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.615188599s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721648216s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844528198s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721449852s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844497681s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.18( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721406937s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844497681s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.5( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.492183685s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'236 lcod 58'235 mlcod 58'235 active pruub 208.615173340s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.5( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.491922379s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'236 lcod 58'235 mlcod 0'0 unknown NOTIFY pruub 208.615173340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.6( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721479416s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844528198s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721183777s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844818115s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.491639137s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 58'237 active pruub 208.615325928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.491581917s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=58'238 lcod 58'237 mlcod 0'0 unknown NOTIFY pruub 208.615325928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.721067429s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.720697403s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844635010s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.720652580s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844635010s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.491105080s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.615264893s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.491066933s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.615264893s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.720496178s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844879150s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.720452309s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844879150s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.719697952s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844650269s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.490540504s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.615676880s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.490498543s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.615676880s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1f( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.719617844s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844650269s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.490022659s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=57'235 lcod 53'234 mlcod 53'234 active pruub 208.615631104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.489983559s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=57'235 lcod 53'234 mlcod 0'0 unknown NOTIFY pruub 208.615631104s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718811035s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844665527s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.719366074s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844940186s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718854904s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844863892s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718803406s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844863892s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.11( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718885422s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844940186s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718687057s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844665527s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718579292s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844848633s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.15( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718498230s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844848633s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.489420891s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.615997314s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.489378929s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.615997314s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718118668s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844879150s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.718016624s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844879150s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.717951775s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active pruub 209.844924927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[11.10( empty local-lis/les=56/58 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59 pruub=10.717893600s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 209.844924927s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.488794327s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.615997314s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.488709450s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 208.616165161s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.488449097s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.616165161s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 59 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59 pruub=9.488316536s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.615997314s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 59 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:26 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: pgmap v186: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 1 op/s; 2/134 objects degraded (1.493%); 4 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.7 scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.7 scrub ok
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.8 deep-scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 7.10 deep-scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: pgmap v187: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 4 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.9 scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 7.10 deep-scrub ok
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 7.12 scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: pgmap v188: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.8 deep-scrub ok
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 10.9 scrub ok
Nov 29 07:32:26 compute-0 ceph-mon[75237]: 5.14 scrub starts
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/134 objects degraded (1.493%), 1 pg degraded)
Nov 29 07:32:26 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:32:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:26 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 07:32:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 11 peering, 294 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:27 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 07:32:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 07:32:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 07:32:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 07:32:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 29 07:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-622052f0b2f8ab23c3a6bdeb30814848e0636119ce3e1ec9ef25b0f438485e5d-merged.mount: Deactivated successfully.
Nov 29 07:32:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 07:32:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 26 peering, 279 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 114 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 29 07:32:28 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 07:32:29 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: pgmap v189: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 58 B/s, 0 keys/s, 0 objects/s recovering
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 5.14 scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 7.12 scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 5.3 scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 7.14 scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 7.14 scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 5.3 scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: osdmap e59: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 ceph-mon[75237]: pgmap v191: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 65 B/s, 0 objects/s recovering
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 5.5 scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 5.5 scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 8.1a scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: pgmap v192: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 302 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 65 B/s, 0 objects/s recovering
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 10.a deep-scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 10.a deep-scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 10.c scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: pgmap v193: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 8.1a scrub ok
Nov 29 07:32:29 compute-0 ceph-mon[75237]: 7.16 scrub starts
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:32:29 compute-0 ceph-mon[75237]: osdmap e60: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 podman[104541]: 2025-11-29 07:32:29.140465595 +0000 UTC m=+60.557425435 container remove 76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c (image=quay.io/ceph/ceph:v18, name=practical_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:32:29 compute-0 sudo[104538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 systemd[1]: libpod-conmon-76df25f5b335158eee8b2cca04354fbc5ec23288aecf51ad4e13b43ac857b74c.scope: Deactivated successfully.
Nov 29 07:32:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e61 encode_pending skipping prime_pg_temp; mapping job 0x55dbe03c7a40 did not complete, 11 left
Nov 29 07:32:29 compute-0 sudo[104773]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.399237633s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 43'39 active pruub 211.936523438s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.399080276s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 211.936523438s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.3( v 43'39 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.398655891s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 43'39 active pruub 211.936172485s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.3( v 43'39 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.398622513s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 211.936172485s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=61) [0]/[1] r=0 lpr=61 pi=[54,61)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=61) [0]/[1] r=0 lpr=61 pi=[54,61)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.7( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.398019791s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 43'39 active pruub 211.935897827s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.7( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.397963524s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 211.935897827s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.397914886s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 43'39 active pruub 211.936111450s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 61 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61 pruub=9.397736549s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 211.936111450s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 sudo[105000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chbploeodioyxagolviwuqgpgpzghbti ; /usr/bin/python3'
Nov 29 07:32:29 compute-0 sudo[105000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=61) [0]/[1] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.d( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.d( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.5( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.5( v 58'236 (0'0,58'236] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=57'235 lcod 53'234 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=57'235 lcod 53'234 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[54,62)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.1( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.4( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.9( v 58'20 lc 58'19 (0'0,58'20] local-lis/les=59/61 n=1 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=58'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.15( v 58'20 lc 58'19 (0'0,58'20] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=58'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.d( v 58'20 lc 58'19 (0'0,58'20] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=58'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.e( v 58'20 lc 58'19 (0'0,58'20] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=58'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.8( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.1e( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.17( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.16( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=59/61 n=0 ec=56/44 lis/c=56/56 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 62 pg[10.7( v 43'16 (0'0,43'16] local-lis/les=59/61 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[6.e( v 43'39 lc 40'19 (0'0,43'39] local-lis/les=59/62 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.2( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.f( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[6.2( v 43'39 (0'0,43'39] local-lis/les=59/62 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[6.6( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=59/62 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=43'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.b( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=59/62 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.6( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.19( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.1a( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.13( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.11( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.12( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.14( v 58'20 lc 58'19 (0'0,58'20] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=58'20 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 62 pg[10.10( v 43'16 (0'0,43'16] local-lis/les=59/62 n=0 ec=54/42 lis/c=54/54 les/c/f=57/57/0 sis=59) [1] r=0 lpr=59 pi=[54,59)/1 crt=43'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:29 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:32:29 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ea7be55d-7d76-48ef-984f-cb2425d692b1 does not exist
Nov 29 07:32:29 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d47e8818-01c9-4c64-b33a-c6ccc65b1387 does not exist
Nov 29 07:32:29 compute-0 python3[105002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:32:29 compute-0 sudo[105003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:29 compute-0 sudo[105003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:29 compute-0 sudo[105003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 podman[105009]: 2025-11-29 07:32:29.653564909 +0000 UTC m=+0.060011263 container create aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:32:29 compute-0 sudo[105040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:32:29 compute-0 sudo[105040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:29 compute-0 sudo[105040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 podman[105009]: 2025-11-29 07:32:29.622631912 +0000 UTC m=+0.029078276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:32:29 compute-0 systemd[1]: Started libpod-conmon-aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756.scope.
Nov 29 07:32:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/943693f3dba5af91c586cec3e874410ea28a90171aac7e2f1466326a6e70e86e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/943693f3dba5af91c586cec3e874410ea28a90171aac7e2f1466326a6e70e86e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:29 compute-0 podman[105009]: 2025-11-29 07:32:29.879129417 +0000 UTC m=+0.285575781 container init aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:32:29 compute-0 podman[105009]: 2025-11-29 07:32:29.886007352 +0000 UTC m=+0.292453696 container start aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:32:30 compute-0 podman[105009]: 2025-11-29 07:32:30.043331734 +0000 UTC m=+0.449778098 container attach aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 10.c scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.18 scrub starts
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 10.18 scrub starts
Nov 29 07:32:30 compute-0 ceph-mon[75237]: pgmap v195: 305 pgs: 11 peering, 294 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.16 scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 10.18 scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.18 scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.17 scrub starts
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 10.1b scrub starts
Nov 29 07:32:30 compute-0 ceph-mon[75237]: pgmap v196: 305 pgs: 26 peering, 279 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 114 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.17 scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 10.1b scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.f scrub starts
Nov 29 07:32:30 compute-0 ceph-mon[75237]: 7.f scrub ok
Nov 29 07:32:30 compute-0 ceph-mon[75237]: osdmap e61: 3 total, 3 up, 3 in
Nov 29 07:32:30 compute-0 ceph-mon[75237]: osdmap e62: 3 total, 3 up, 3 in
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:32:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:32:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 07:32:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 07:32:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 07:32:30 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 63 pg[6.f( v 43'39 lc 40'1 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 63 pg[6.7( v 43'39 lc 40'21 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=62/63 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.1b( v 53'234 (0'0,53'234] local-lis/les=61/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=61) [0]/[1] async=[0] r=0 lpr=61 pi=[54,61)/1 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.d( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.1( v 53'234 (0'0,53'234] local-lis/les=62/63 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 63 pg[6.b( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 63 pg[6.3( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=61/63 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=43'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=58'238 lcod 58'237 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.1f( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.5( v 58'236 (0'0,58'236] local-lis/les=62/63 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=58'236 lcod 58'235 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=57'235 lcod 53'234 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 63 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[54,62)/2 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 remapped+peering, 41 peering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:31 compute-0 ceph-mon[75237]: osdmap e63: 3 total, 3 up, 3 in
Nov 29 07:32:31 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 07:32:31 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 07:32:31 compute-0 gracious_curran[105069]: {
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "user_id": "openstack",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "display_name": "openstack",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "email": "",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "suspended": 0,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "max_buckets": 1000,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "subusers": [],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "keys": [
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         {
Nov 29 07:32:31 compute-0 gracious_curran[105069]:             "user": "openstack",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:             "access_key": "GWEYRPXF5EFCR4GA9AIY",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:             "secret_key": "70y9lZYnAcnJKV79YKQChN0DBqdsyoaRVBCAyMGA"
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         }
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     ],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "swift_keys": [],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "caps": [],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "op_mask": "read, write, delete",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "default_placement": "",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "default_storage_class": "",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "placement_tags": [],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "bucket_quota": {
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "enabled": false,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "check_on_raw": false,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_size": -1,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_size_kb": 0,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_objects": -1
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     },
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "user_quota": {
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "enabled": false,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "check_on_raw": false,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_size": -1,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_size_kb": 0,
Nov 29 07:32:31 compute-0 gracious_curran[105069]:         "max_objects": -1
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     },
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "temp_url_keys": [],
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "type": "rgw",
Nov 29 07:32:31 compute-0 gracious_curran[105069]:     "mfa_ids": []
Nov 29 07:32:31 compute-0 gracious_curran[105069]: }
Nov 29 07:32:31 compute-0 gracious_curran[105069]: 
Nov 29 07:32:31 compute-0 sshd-session[105154]: Invalid user server from 20.185.243.158 port 41668
Nov 29 07:32:32 compute-0 sshd-session[105154]: Received disconnect from 20.185.243.158 port 41668:11: Bye Bye [preauth]
Nov 29 07:32:32 compute-0 sshd-session[105154]: Disconnected from invalid user server 20.185.243.158 port 41668 [preauth]
Nov 29 07:32:32 compute-0 systemd[1]: libpod-aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756.scope: Deactivated successfully.
Nov 29 07:32:32 compute-0 podman[105009]: 2025-11-29 07:32:32.132600434 +0000 UTC m=+2.539046788 container died aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-943693f3dba5af91c586cec3e874410ea28a90171aac7e2f1466326a6e70e86e-merged.mount: Deactivated successfully.
Nov 29 07:32:32 compute-0 podman[105009]: 2025-11-29 07:32:32.189980472 +0000 UTC m=+2.596426816 container remove aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756 (image=quay.io/ceph/ceph:v18, name=gracious_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:32:32 compute-0 sudo[105000]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:32 compute-0 systemd[1]: libpod-conmon-aec2560b833ff2b4bf003f471082ac1cf1cb4c300aca049df0a06f90722a9756.scope: Deactivated successfully.
Nov 29 07:32:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 07:32:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 2 active+recovery_wait+degraded, 1 remapped+peering, 30 peering, 1 active+recovering, 271 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 3/139 objects degraded (2.158%); 2/139 objects misplaced (1.439%); 0 B/s, 0 objects/s recovering
Nov 29 07:32:32 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 29 07:32:32 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 29 07:32:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 07:32:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.d( v 63'240 (0'0,63'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 luod=0'0 lua=58'238 crt=63'240 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 luod=0'0 crt=58'236 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.d( v 63'240 (0'0,63'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=63'240 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=58'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.1( v 63'239 (0'0,63'239] local-lis/les=0/0 n=7 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 luod=0'0 lua=53'234 crt=63'239 lcod 63'238 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.1b( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 luod=0'0 lua=53'234 crt=63'236 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-mon[75237]: pgmap v200: 305 pgs: 1 remapped+peering, 41 peering, 263 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.1b( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=63'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.1( v 63'239 (0'0,63'239] local-lis/les=0/0 n=7 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=63'239 lcod 63'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 64 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.1b( v 63'236 (0'0,63'236] local-lis/les=61/63 n=4 ec=54/40 lis/c=61/54 les/c/f=63/56/0 sis=64 pruub=13.633961678s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/1 crt=63'236 lcod 63'235 mlcod 63'235 active pruub 219.676071167s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.630942345s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/2 crt=58'236 lcod 58'235 mlcod 58'235 active pruub 219.673049927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633878708s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.676071167s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.630857468s) [0] r=-1 lpr=64 pi=[54,64)/2 crt=58'236 lcod 58'235 mlcod 0'0 unknown NOTIFY pruub 219.673049927s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.1b( v 63'236 (0'0,63'236] local-lis/les=61/63 n=4 ec=54/40 lis/c=61/54 les/c/f=63/56/0 sis=64 pruub=13.633858681s) [0] r=-1 lpr=64 pi=[54,64)/1 crt=63'236 lcod 63'235 mlcod 0'0 unknown NOTIFY pruub 219.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633844376s) [0] r=-1 lpr=64 pi=[54,64)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.d( v 63'240 (0'0,63'240] local-lis/les=62/63 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633708954s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/2 crt=63'240 lcod 63'239 mlcod 63'239 active pruub 219.676147461s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.d( v 63'240 (0'0,63'240] local-lis/les=62/63 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633637428s) [0] r=-1 lpr=64 pi=[54,64)/2 crt=63'240 lcod 63'239 mlcod 0'0 unknown NOTIFY pruub 219.676147461s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.1( v 63'239 (0'0,63'239] local-lis/les=62/63 n=7 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633491516s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/2 luod=63'238 crt=63'239 lcod 63'237 mlcod 63'237 active pruub 219.676101685s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.1( v 63'239 (0'0,63'239] local-lis/les=62/63 n=7 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633319855s) [0] r=-1 lpr=64 pi=[54,64)/2 crt=63'239 lcod 63'237 mlcod 0'0 unknown NOTIFY pruub 219.676101685s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633289337s) [0] async=[0] r=-1 lpr=64 pi=[54,64)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.676162720s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:32 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 64 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64 pruub=13.633243561s) [0] r=-1 lpr=64 pi=[54,64)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.676162720s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3/139 objects degraded (2.158%), 2 pgs degraded (PG_DEGRADED)
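The [WRN] above is the monitor raising PG_DEGRADED: 3 of 139 object copies are missing from their acting sets while the PGs re-peer, and 3/139 ≈ 2.158%, matching the logged percentage. A minimal sketch of polling the same health state programmatically, assuming the rados Python bindings, a readable /etc/ceph/ceph.conf, and admin credentials (all assumptions, none shown in this log):

#!/usr/bin/env python3
# Sketch: query cluster health via a mon_command, as `ceph health` does.
# The conffile path and credentials are assumptions.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b"")
    if ret != 0:
        raise RuntimeError(outs)
    health = json.loads(outbuf)
    print(health["status"])                       # e.g. HEALTH_WARN
    for name, check in health.get("checks", {}).items():
        print(name, check["summary"]["message"])  # e.g. PG_DEGRADED: ...
finally:
    cluster.shutdown()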
Nov 29 07:32:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 07:32:33 compute-0 ceph-mon[75237]: 10.1c scrub starts
Nov 29 07:32:33 compute-0 ceph-mon[75237]: 10.1c scrub ok
Nov 29 07:32:33 compute-0 ceph-mon[75237]: pgmap v201: 305 pgs: 2 active+recovery_wait+degraded, 1 remapped+peering, 30 peering, 1 active+recovering, 271 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 3/139 objects degraded (2.158%); 2/139 objects misplaced (1.439%); 0 B/s, 0 objects/s recovering
Nov 29 07:32:33 compute-0 ceph-mon[75237]: 10.1d scrub starts
Nov 29 07:32:33 compute-0 ceph-mon[75237]: 10.1d scrub ok
Nov 29 07:32:33 compute-0 ceph-mon[75237]: osdmap e64: 3 total, 3 up, 3 in
Nov 29 07:32:33 compute-0 ceph-mon[75237]: Health check failed: Degraded data redundancy: 3/139 objects degraded (2.158%), 2 pgs degraded (PG_DEGRADED)
Nov 29 07:32:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 07:32:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=62/63 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.385173798s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=63'238 lcod 63'237 mlcod 63'237 active pruub 219.677566528s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384979248s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.677429199s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384869576s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.677429199s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384885788s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=58'238 lcod 58'237 mlcod 58'237 active pruub 219.677459717s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384617805s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=58'238 lcod 58'237 mlcod 0'0 unknown NOTIFY pruub 219.677459717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.385105133s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=58'238 lcod 58'237 mlcod 58'237 active pruub 219.677383423s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=62/63 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384286880s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=58'238 lcod 58'237 mlcod 0'0 unknown NOTIFY pruub 219.677383423s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.1f( v 63'236 (0'0,63'236] local-lis/les=62/63 n=4 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384361267s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=63'236 lcod 63'235 mlcod 63'235 active pruub 219.677551270s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.1f( v 63'236 (0'0,63'236] local-lis/les=62/63 n=4 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384316444s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=63'236 lcod 63'235 mlcod 0'0 unknown NOTIFY pruub 219.677551270s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384307861s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.677703857s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.384255409s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.677703857s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=62/63 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.385060310s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=63'238 lcod 63'237 mlcod 0'0 unknown NOTIFY pruub 219.677566528s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.383592606s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.677612305s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.383441925s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.677612305s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.382907867s) [0] async=[0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.677658081s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 65 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65 pruub=12.382843018s) [0] r=-1 lpr=65 pi=[54,65)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.677658081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=58'238 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=58'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=58'238 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1f( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 lua=53'234 crt=63'236 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=58'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1f( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=63'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=0/0 n=6 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 lua=58'236 crt=63'238 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=0/0 n=6 ec=54/40 lis/c=0/54 les/c/f=0/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=63'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.9( v 53'234 (0'0,53'234] local-lis/les=64/65 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1( v 63'239 (0'0,63'239] local-lis/les=64/65 n=7 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=63'239 lcod 63'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.d( v 63'240 (0'0,63'240] local-lis/les=64/65 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=63'240 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.1b( v 63'236 (0'0,63'236] local-lis/les=64/65 n=4 ec=54/40 lis/c=61/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/1 crt=63'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.3( v 58'236 (0'0,58'236] local-lis/les=64/65 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=58'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 65 pg[9.b( v 53'234 (0'0,53'234] local-lis/les=64/65 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=64) [0] r=0 lpr=64 pi=[54,64)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
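The start_peering_interval / "transitioning to Primary" / "transitioning to Stray" bursts above are the PG peering state machine reacting to each new osdmap epoch: when a PG's acting set flips from [1] to [0], osd.0's role moves from -1 (not in the set) to 0 (primary) and it drives activation through "react AllReplicasActivated Activating complete", while osd.1, no longer in the acting set, parks its copy as Stray. A small hypothetical helper for pulling the PG id and acting-set change out of such lines; the regex is an assumption inferred only from the line format shown above:

#!/usr/bin/env python3
# Hypothetical helper: extract pgid and acting-set change from
# "start_peering_interval" lines like those logged above.
import re

PEERING_RE = re.compile(
    r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*?"
    r"acting \[(?P<old>[\d,]*)\] -> \[(?P<new>[\d,]*)\], "
    r"acting_primary (?P<oldp>-?\d+) -> (?P<newp>-?\d+)")

def parse(line):
    m = PEERING_RE.search(line)
    if m is None:
        return None
    return (m.group("pgid"), m.group("old"), m.group("new"),
            int(m.group("oldp")), int(m.group("newp")))

sample = ("osd.0 pg_epoch: 64 pg[9.d( v 63'240 ...) [0] r=0 ... ] "
          "start_peering_interval up [0] -> [0], acting [1] -> [0], "
          "acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0")
print(parse(sample))  # ('9.d', '1', '0', 1, 0)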
Nov 29 07:32:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 07:32:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 07:32:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 07:32:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 luod=0'0 crt=57'235 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=0/0 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 crt=57'235 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 66 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66 pruub=11.951332092s) [0] async=[0] r=-1 lpr=66 pi=[54,66)/2 crt=57'235 lcod 53'234 mlcod 53'234 active pruub 219.677871704s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 66 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66 pruub=11.951286316s) [0] r=-1 lpr=66 pi=[54,66)/2 crt=57'235 lcod 53'234 mlcod 0'0 unknown NOTIFY pruub 219.677871704s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 66 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66 pruub=11.950978279s) [0] async=[0] r=-1 lpr=66 pi=[54,66)/2 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 219.677658081s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:34 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 66 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=62/63 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66 pruub=11.950926781s) [0] r=-1 lpr=66 pi=[54,66)/2 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.677658081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.1f( v 63'236 (0'0,63'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=63'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.f( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=58'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.17( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.11( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=58'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.7( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 66 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=65/66 n=6 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=65) [0] r=0 lpr=65 pi=[54,65)/2 crt=63'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 remapped+peering, 2 active+remapped, 1 active+recovering, 298 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 243 B/s wr, 19 op/s; 3/155 objects degraded (1.935%); 2/155 objects misplaced (1.290%); 52 B/s, 3 objects/s recovering
Nov 29 07:32:34 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 29 07:32:34 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 29 07:32:34 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.c deep-scrub starts
Nov 29 07:32:34 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.c deep-scrub ok
Nov 29 07:32:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 07:32:35 compute-0 ceph-mon[75237]: osdmap e65: 3 total, 3 up, 3 in
Nov 29 07:32:35 compute-0 ceph-mon[75237]: 7.19 scrub starts
Nov 29 07:32:35 compute-0 ceph-mon[75237]: osdmap e66: 3 total, 3 up, 3 in
Nov 29 07:32:35 compute-0 ceph-mon[75237]: 7.19 scrub ok
Nov 29 07:32:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 07:32:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 07:32:35 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 67 pg[9.13( v 57'235 (0'0,57'235] local-lis/les=66/67 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 crt=57'235 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:35 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 67 pg[9.15( v 53'234 (0'0,53'234] local-lis/les=66/67 n=3 ec=54/40 lis/c=62/54 les/c/f=63/56/0 sis=66) [0] r=0 lpr=66 pi=[54,66)/2 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:35 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 07:32:35 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 07:32:35 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 07:32:35 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 07:32:36 compute-0 ceph-mon[75237]: pgmap v205: 305 pgs: 1 active+clean+scrubbing, 2 active+recovery_wait+degraded, 1 remapped+peering, 2 active+remapped, 1 active+recovering, 298 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 243 B/s wr, 19 op/s; 3/155 objects degraded (1.935%); 2/155 objects misplaced (1.290%); 52 B/s, 3 objects/s recovering
Nov 29 07:32:36 compute-0 ceph-mon[75237]: 10.1f scrub starts
Nov 29 07:32:36 compute-0 ceph-mon[75237]: 10.1f scrub ok
Nov 29 07:32:36 compute-0 ceph-mon[75237]: 8.c deep-scrub starts
Nov 29 07:32:36 compute-0 ceph-mon[75237]: 8.c deep-scrub ok
Nov 29 07:32:36 compute-0 ceph-mon[75237]: osdmap e67: 3 total, 3 up, 3 in
Nov 29 07:32:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 511 B/s wr, 28 op/s; 720 B/s, 2 keys/s, 16 objects/s recovering
Nov 29 07:32:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 07:32:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:32:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 07:32:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
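The two pool-set commands dispatched above are the mgr stepping pgp_num toward its target one increment at a time (pgp_num_actual 5 for cephfs.cephfs.meta, 4 for default.rgw.log, each bumped again further down), so placement changes trickle out as many small remappings rather than one large shuffle. A sketch of replaying one of these commands by hand; the JSON payload is verbatim from the audit entry, while the connection details are the same assumptions as in the earlier sketch:

#!/usr/bin/env python3
# Sketch: issue the same "osd pool set pgp_num_actual" mon_command the
# mgr dispatched above. Payload is verbatim from the audit log; the
# conffile path is an assumption.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    cmd = {"prefix": "osd pool set", "pool": "cephfs.cephfs.meta",
           "var": "pgp_num_actual", "val": "5"}
    ret, _, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)  # ret == 0 and outs echoes the change on success
finally:
    cluster.shutdown()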
Nov 29 07:32:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 07:32:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/155 objects degraded (1.935%), 2 pgs degraded)
Nov 29 07:32:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:32:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:32:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 07:32:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 07:32:37 compute-0 ceph-mon[75237]: 8.2 scrub starts
Nov 29 07:32:37 compute-0 ceph-mon[75237]: 8.e scrub starts
Nov 29 07:32:37 compute-0 ceph-mon[75237]: 8.e scrub ok
Nov 29 07:32:37 compute-0 ceph-mon[75237]: 8.2 scrub ok
Nov 29 07:32:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:32:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:32:37 compute-0 sshd-session[105166]: Received disconnect from 103.236.140.19 port 41724:11: Bye Bye [preauth]
Nov 29 07:32:37 compute-0 sshd-session[105166]: Disconnected from authenticating user root 103.236.140.19 port 41724 [preauth]
Nov 29 07:32:37 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 07:32:37 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 07:32:37 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 68 pg[6.c( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=13.691247940s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 237.739624023s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:37 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 68 pg[6.c( v 43'39 (0'0,43'39] local-lis/les=51/52 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=13.691177368s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.739624023s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:37 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 68 pg[6.4( v 43'39 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=12.198290825s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 236.247207642s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:37 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 68 pg[6.4( v 43'39 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=12.198222160s) [1] r=-1 lpr=68 pi=[51,68)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.247207642s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:37 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 68 pg[6.c( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:37 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 68 pg[6.4( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:32:38
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data']
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 7/10 changes
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Executing plan auto_2025-11-29_07:32:38
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 6.4 mappings [{'from': 1, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.5 mappings [{'from': 0, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.10 mappings [{'from': 1, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.16 mappings [{'from': 1, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.19 mappings [{'from': 0, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.1a mappings [{'from': 1, 'to': 2}]
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.1d mappings [{'from': 0, 'to': 2}]
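The balancer block above shows upmap mode end to end: build plan auto_2025-11-29_07:32:38 across the listed pools, prepare 7 of at most 10 changes, then execute them as `osd pg-upmap-items` exceptions that each move one PG onto osd.2. The mon_command JSON appears verbatim in the lines that follow; a sketch of applying one mapping by hand and reading it back from the osdmap (same assumed connection setup; "pg_upmap_items" is the matching key in `osd dump` JSON output):

#!/usr/bin/env python3
# Sketch: apply one of the balancer's upmap exceptions, then read it
# back from the osdmap. Payload is verbatim from the audit log below;
# the conffile path is an assumption.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    apply_cmd = {"prefix": "osd pg-upmap-items", "format": "json",
                 "pgid": "6.4", "id": [1, 2]}  # remap pg 6.4: osd.1 -> osd.2
    ret, _, outs = cluster.mon_command(json.dumps(apply_cmd), b"")
    assert ret == 0, outs

    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd dump", "format": "json"}), b"")
    assert ret == 0, outs
    for item in json.loads(outbuf).get("pg_upmap_items", []):
        if item["pgid"] == "6.4":
            print(item["mappings"])  # [{'from': 1, 'to': 2}]
finally:
    cluster.shutdown()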
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.4", "id": [1, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.4", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.5", "id": [0, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.5", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.10", "id": [1, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.10", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1a", "id": [1, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1a", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 23 op/s; 616 B/s, 2 keys/s, 14 objects/s recovering
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: pgmap v207: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 511 B/s wr, 28 op/s; 720 B/s, 2 keys/s, 16 objects/s recovering
Nov 29 07:32:38 compute-0 ceph-mon[75237]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/155 objects degraded (1.935%), 2 pgs degraded)
Nov 29 07:32:38 compute-0 ceph-mon[75237]: Cluster is now healthy
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: osdmap e68: 3 total, 3 up, 3 in
Nov 29 07:32:38 compute-0 ceph-mon[75237]: 7.c scrub starts
Nov 29 07:32:38 compute-0 ceph-mon[75237]: 7.c scrub ok
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.4", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.5", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.10", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1a", "id": [1, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]: dispatch
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.4", "id": [1, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.5", "id": [0, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.10", "id": [1, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1a", "id": [1, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]': finished
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 crush map has features 3314933000854323200, adjusting msgr requires
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 07:32:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 07:32:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
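Every osdmap that changes the crush map makes each daemon recompute which feature bits its peers must speak (432629239337189376 toward clients and mons here, 3314933000854323200 between OSDs) and adjust its messenger requirements accordingly. The values are plain 64-bit masks, so a quick decode needs no Ceph API at all:

#!/usr/bin/env python3
# Decode the crush-map feature masks logged above as raw 64-bit values.
for mask in (432629239337189376, 3314933000854323200):
    bits = [b for b in range(64) if mask >> b & 1]
    print(f"{mask:#018x} with {len(bits)} bits set: {bits}")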
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.840084076s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=53'234 mlcod 0'0 active pruub 236.698196411s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.840016365s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=53'234 mlcod 0'0 unknown NOTIFY pruub 236.698196411s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[6.4( v 43'39 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=-1 lpr=69 pi=[51,69)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[6.4( v 43'39 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=-1 lpr=69 pi=[51,69)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=65/66 n=6 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.959060669s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=63'238 mlcod 0'0 active pruub 236.817749023s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=65/66 n=6 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.959035873s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=63'238 mlcod 0'0 unknown NOTIFY pruub 236.817749023s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.958709717s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=58'238 mlcod 0'0 active pruub 236.817626953s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 69 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69 pruub=11.958687782s) [2] r=-1 lpr=69 pi=[65,69)/1 crt=58'238 mlcod 0'0 unknown NOTIFY pruub 236.817626953s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237934113s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 225.133941650s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237903595s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 225.133941650s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[6.4( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=-1 lpr=69 pi=[51,69)/1 crt=43'39 mlcod 0'0 unknown m=4 mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237069130s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=63'236 lcod 63'235 mlcod 63'235 active pruub 225.133865356s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237039566s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=63'236 lcod 63'235 mlcod 0'0 unknown NOTIFY pruub 225.133865356s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237050056s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 225.133926392s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69 pruub=13.237027168s) [2] r=-1 lpr=69 pi=[54,69)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 225.133926392s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[6.4( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=51/52 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=-1 lpr=69 pi=[51,69)/1 crt=43'39 mlcod 0'0 unknown NOTIFY m=4 mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69) [2] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[6.4( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69) [2] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 69 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=69) [2] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 69 pg[6.c( v 43'39 lc 40'17 (0'0,43'39] local-lis/les=68/69 n=1 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:38 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 07:32:38 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 07:32:39 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 07:32:39 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 07:32:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 07:32:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 07:32:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=53'234 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=65/66 n=3 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=53'234 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=65/66 n=6 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=63'238 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=65/66 n=6 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=63'238 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=58'238 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=0 lpr=70 pi=[65,70)/1 crt=58'238 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[65,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=63'236 lcod 63'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=54/56 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=63'236 lcod 63'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70 pruub=15.313359261s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=43'39 mlcod 43'39 active pruub 227.936447144s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70 pruub=15.313324928s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 227.936447144s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[6.5( v 43'39 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70 pruub=15.312663078s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=43'39 mlcod 43'39 active pruub 227.936187744s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 70 pg[6.5( v 43'39 (0'0,43'39] local-lis/les=55/57 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70 pruub=15.312633514s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 227.936187744s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 70 pg[6.5( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 70 pg[6.4( v 43'39 lc 40'15 (0'0,43'39] local-lis/les=69/70 n=4 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=0 lpr=69 pi=[51,69)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:39 compute-0 ceph-mon[75237]: pgmap v209: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 23 op/s; 616 B/s, 2 keys/s, 14 objects/s recovering
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.4", "id": [1, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.5", "id": [0, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.10", "id": [1, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1a", "id": [1, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: osdmap e69: 3 total, 3 up, 3 in
Nov 29 07:32:39 compute-0 ceph-mon[75237]: 7.e scrub starts
Nov 29 07:32:39 compute-0 ceph-mon[75237]: 7.e scrub ok
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:32:39 compute-0 ceph-mon[75237]: osdmap e70: 3 total, 3 up, 3 in
Nov 29 07:32:39 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 29 07:32:39 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 29 07:32:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 07:32:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 07:32:40 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 07:32:40 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 07:32:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 07:32:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 07:32:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 07:32:40 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 71 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=70/71 n=4 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=63'236 lcod 63'235 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 71 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 71 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[54,70)/1 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 71 pg[9.5( v 63'238 (0'0,63'238] local-lis/les=70/71 n=6 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[65,70)/1 crt=63'238 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 71 pg[9.19( v 58'238 (0'0,58'238] local-lis/les=70/71 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[65,70)/1 crt=58'238 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 71 pg[9.1d( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[65,70)/1 crt=53'234 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 71 pg[6.d( v 43'39 lc 40'13 (0'0,43'39] local-lis/les=70/71 n=1 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 71 pg[6.5( v 43'39 lc 40'11 (0'0,43'39] local-lis/les=70/71 n=2 ec=51/26 lis/c=55/55 les/c/f=57/58/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 3 remapped+peering, 302 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 07:32:40 compute-0 ceph-mon[75237]: 7.1d scrub starts
Nov 29 07:32:40 compute-0 ceph-mon[75237]: 7.1d scrub ok
Nov 29 07:32:40 compute-0 ceph-mon[75237]: 7.15 scrub starts
Nov 29 07:32:40 compute-0 ceph-mon[75237]: 7.15 scrub ok
Nov 29 07:32:40 compute-0 ceph-mon[75237]: osdmap e71: 3 total, 3 up, 3 in
Nov 29 07:32:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 07:32:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 07:32:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.1d( v 71'238 (0'0,71'238] local-lis/les=70/71 n=5 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.727624893s) [2] async=[2] r=-1 lpr=72 pi=[65,72)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 242.596389771s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.1d( v 71'238 (0'0,71'238] local-lis/les=70/71 n=5 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.727533340s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 242.596389771s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.5( v 71'240 (0'0,71'240] local-lis/les=70/71 n=7 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.726191521s) [2] async=[2] r=-1 lpr=72 pi=[65,72)/1 crt=71'240 lcod 71'239 mlcod 71'239 active pruub 242.595428467s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.5( v 71'240 (0'0,71'240] local-lis/les=70/71 n=7 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.726146698s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=71'240 lcod 71'239 mlcod 0'0 unknown NOTIFY pruub 242.595428467s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.19( v 71'244 (0'0,71'244] local-lis/les=70/71 n=8 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.726922989s) [2] async=[2] r=-1 lpr=72 pi=[65,72)/1 crt=71'244 lcod 71'243 mlcod 71'243 active pruub 242.596389771s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 72 pg[9.19( v 71'244 (0'0,71'244] local-lis/les=70/71 n=8 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72 pruub=14.726875305s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=71'244 lcod 71'243 mlcod 0'0 unknown NOTIFY pruub 242.596389771s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-mon[75237]: 8.f scrub starts
Nov 29 07:32:41 compute-0 ceph-mon[75237]: 8.f scrub ok
Nov 29 07:32:41 compute-0 ceph-mon[75237]: 7.1e scrub starts
Nov 29 07:32:41 compute-0 ceph-mon[75237]: 7.1e scrub ok
Nov 29 07:32:41 compute-0 ceph-mon[75237]: pgmap v213: 305 pgs: 3 remapped+peering, 302 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.722881317s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 229.627578735s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.722813606s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.627578735s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=70/71 n=4 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.721334457s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=63'236 lcod 63'235 mlcod 63'235 active pruub 229.626770020s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=70/71 n=4 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.721297264s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=63'236 lcod 63'235 mlcod 0'0 unknown NOTIFY pruub 229.626770020s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.721833229s) [2] async=[2] r=-1 lpr=72 pi=[54,72)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 229.627563477s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 72 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=70/71 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72 pruub=14.721798897s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.627563477s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=63'236 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=63'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.1d( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 luod=0'0 lua=53'234 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.1d( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.5( v 71'240 (0'0,71'240] local-lis/les=0/0 n=7 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 luod=0'0 lua=63'238 crt=71'240 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.5( v 71'240 (0'0,71'240] local-lis/les=0/0 n=7 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'240 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.19( v 71'244 (0'0,71'244] local-lis/les=0/0 n=8 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 luod=0'0 lua=58'238 crt=71'244 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.19( v 71'244 (0'0,71'244] local-lis/les=0/0 n=8 ec=54/40 lis/c=0/65 les/c/f=0/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'244 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 72 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 07:32:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 3 active+remapped, 3 remapped+peering, 299 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 28 op/s; 137 B/s, 3 objects/s recovering
Nov 29 07:32:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:32:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:32:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 07:32:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 07:32:42 compute-0 ceph-mon[75237]: osdmap e72: 3 total, 3 up, 3 in
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.19( v 71'244 (0'0,71'244] local-lis/les=72/73 n=8 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'244 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.5( v 71'240 (0'0,71'240] local-lis/les=72/73 n=7 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'240 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=72/73 n=4 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=63'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=70/54 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[54,72)/1 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:42 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 73 pg[9.1d( v 71'238 (0'0,71'238] local-lis/les=72/73 n=5 ec=54/40 lis/c=70/65 les/c/f=71/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:43 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 07:32:43 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 07:32:43 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Nov 29 07:32:43 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Nov 29 07:32:43 compute-0 ceph-mon[75237]: 8.b scrub starts
Nov 29 07:32:43 compute-0 ceph-mon[75237]: 8.b scrub ok
Nov 29 07:32:43 compute-0 ceph-mon[75237]: pgmap v215: 305 pgs: 3 active+remapped, 3 remapped+peering, 299 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 28 op/s; 137 B/s, 3 objects/s recovering
Nov 29 07:32:43 compute-0 ceph-mon[75237]: osdmap e73: 3 total, 3 up, 3 in
Nov 29 07:32:44 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 07:32:44 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 07:32:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 38 op/s; 443 B/s, 5 objects/s recovering
Nov 29 07:32:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 07:32:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:32:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 07:32:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:32:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 07:32:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:32:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:32:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 07:32:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 8.d scrub starts
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 8.d scrub ok
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 7.6 deep-scrub starts
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 7.6 deep-scrub ok
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 8.1 scrub starts
Nov 29 07:32:46 compute-0 ceph-mon[75237]: 8.1 scrub ok
Nov 29 07:32:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:32:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:32:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 33 op/s; 389 B/s, 4 objects/s recovering
Nov 29 07:32:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 07:32:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:32:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 07:32:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:32:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 29 07:32:47 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 29 07:32:47 compute-0 sshd-session[105168]: Accepted publickey for zuul from 192.168.122.30 port 53088 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:32:47 compute-0 systemd-logind[782]: New session 34 of user zuul.
Nov 29 07:32:47 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 29 07:32:47 compute-0 sshd-session[105168]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:32:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 07:32:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:32:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:32:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 07:32:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.997603416s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 233.134323120s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=54/56 n=8 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.997383118s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=70'242 lcod 70'241 mlcod 70'241 active pruub 233.134231567s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.997500420s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 233.134323120s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=54/56 n=8 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.997344017s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=70'242 lcod 70'241 mlcod 0'0 unknown NOTIFY pruub 233.134231567s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=54/56 n=6 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.996907234s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=71'240 lcod 71'239 mlcod 71'239 active pruub 233.134490967s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:47 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 75 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=54/56 n=6 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75 pruub=11.996769905s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=71'240 lcod 71'239 mlcod 0'0 unknown NOTIFY pruub 233.134490967s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:47 compute-0 ceph-mon[75237]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 38 op/s; 443 B/s, 5 objects/s recovering
Nov 29 07:32:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:32:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:32:47 compute-0 ceph-mon[75237]: osdmap e74: 3 total, 3 up, 3 in
Nov 29 07:32:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:32:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:32:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 75 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75) [2] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 75 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75) [2] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 75 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=75) [2] r=0 lpr=75 pi=[54,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:48 compute-0 python3.9[105321]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:32:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 07:32:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 07:32:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 297 B/s, 2 objects/s recovering
Nov 29 07:32:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 07:32:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 07:32:49 compute-0 sudo[105537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgonaokpgcxngcglwrvajmummmmbmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401569.3538537-32-206694666702050/AnsiballZ_command.py'
Nov 29 07:32:49 compute-0 sudo[105537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:49 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 29 07:32:49 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 29 07:32:50 compute-0 python3.9[105539]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:32:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 07:32:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:50 compute-0 ceph-mon[75237]: pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 33 op/s; 389 B/s, 4 objects/s recovering
Nov 29 07:32:50 compute-0 ceph-mon[75237]: 7.9 deep-scrub starts
Nov 29 07:32:50 compute-0 ceph-mon[75237]: 7.9 deep-scrub ok
Nov 29 07:32:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:32:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:32:50 compute-0 ceph-mon[75237]: osdmap e75: 3 total, 3 up, 3 in
Nov 29 07:32:50 compute-0 ceph-mon[75237]: 8.3 scrub starts
Nov 29 07:32:50 compute-0 ceph-mon[75237]: 8.3 scrub ok
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 76 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[54,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=54/56 n=8 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=70'242 lcod 70'241 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=54/56 n=6 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=71'240 lcod 71'239 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=54/56 n=6 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=71'240 lcod 71'239 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=54/56 n=8 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=70'242 lcod 70'241 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:50 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 76 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] r=0 lpr=76 pi=[54,76)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:50 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 29 07:32:50 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 29 07:32:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 29 07:32:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 29 07:32:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:53 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Nov 29 07:32:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 29 07:32:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 07:32:53 compute-0 sshd-session[105552]: Invalid user user from 114.34.106.146 port 53518
Nov 29 07:32:53 compute-0 sshd-session[105552]: Received disconnect from 114.34.106.146 port 53518:11: Bye Bye [preauth]
Nov 29 07:32:53 compute-0 sshd-session[105552]: Disconnected from invalid user user 114.34.106.146 port 53518 [preauth]
Nov 29 07:32:54 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 29 07:32:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:55 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 29 07:32:55 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:32:55 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Nov 29 07:32:55 compute-0 ceph-mon[75237]: pgmap v221: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 297 B/s, 2 objects/s recovering
Nov 29 07:32:55 compute-0 ceph-mon[75237]: 8.6 scrub starts
Nov 29 07:32:55 compute-0 ceph-mon[75237]: 8.6 scrub ok
Nov 29 07:32:55 compute-0 ceph-mon[75237]: osdmap e76: 3 total, 3 up, 3 in
Nov 29 07:32:55 compute-0 ceph-mon[75237]: pgmap v223: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:55 compute-0 ceph-mon[75237]: 7.8 deep-scrub starts
Nov 29 07:32:55 compute-0 ceph-mon[75237]: 7.8 deep-scrub ok
Nov 29 07:32:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 07:32:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 07:32:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 77 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=76/77 n=8 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=70'242 lcod 70'241 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 77 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=76/77 n=6 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=71'240 lcod 71'239 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 77 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=76/77 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=71'238 lcod 71'237 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.5 scrub starts
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.5 scrub ok
Nov 29 07:32:56 compute-0 ceph-mon[75237]: pgmap v224: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 7.a deep-scrub starts
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.7 scrub starts
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.8 deep-scrub starts
Nov 29 07:32:56 compute-0 ceph-mon[75237]: pgmap v225: 305 pgs: 3 unknown, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.7 scrub ok
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 8.8 deep-scrub ok
Nov 29 07:32:56 compute-0 ceph-mon[75237]: 7.a deep-scrub ok
Nov 29 07:32:56 compute-0 ceph-mon[75237]: osdmap e77: 3 total, 3 up, 3 in
Nov 29 07:32:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 07:32:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 07:32:56 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 78 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 luod=0'0 crt=71'240 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:56 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 78 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 crt=71'240 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 78 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=76/77 n=6 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=78 pruub=15.085280418s) [2] async=[2] r=-1 lpr=78 pi=[54,78)/1 crt=71'240 lcod 71'239 mlcod 71'239 active pruub 245.341064453s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:56 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 78 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=76/77 n=6 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=78 pruub=15.084773064s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=71'240 lcod 71'239 mlcod 0'0 unknown NOTIFY pruub 245.341064453s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:57 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 29 07:32:57 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 29 07:32:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 07:32:58 compute-0 ceph-mon[75237]: pgmap v227: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:58 compute-0 ceph-mon[75237]: osdmap e78: 3 total, 3 up, 3 in
Nov 29 07:32:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 07:32:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 07:32:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:58 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 07:32:59 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 79 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=76/77 n=8 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79 pruub=13.038259506s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=70'242 lcod 70'241 mlcod 70'241 active pruub 245.341018677s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:59 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 79 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=76/77 n=5 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79 pruub=13.038391113s) [2] async=[2] r=-1 lpr=79 pi=[54,79)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 245.341033936s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:59 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 79 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=76/77 n=5 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79 pruub=13.038132668s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 245.341033936s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:59 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 79 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=76/77 n=8 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79 pruub=13.037620544s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=70'242 lcod 70'241 mlcod 0'0 unknown NOTIFY pruub 245.341018677s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 79 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=0/0 n=8 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 luod=0'0 crt=70'242 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 79 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 luod=0'0 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 79 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 79 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=0/0 n=8 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=70'242 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:32:59 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 79 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=78/79 n=6 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=78) [2] r=0 lpr=78 pi=[54,78)/1 crt=71'240 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 07:32:59 compute-0 ceph-mon[75237]: 8.18 deep-scrub starts
Nov 29 07:32:59 compute-0 ceph-mon[75237]: 8.18 deep-scrub ok
Nov 29 07:32:59 compute-0 ceph-mon[75237]: osdmap e79: 3 total, 3 up, 3 in
Nov 29 07:32:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 07:32:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 80 pg[9.e( v 71'238 (0'0,71'238] local-lis/les=79/80 n=5 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:59 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 80 pg[9.6( v 70'242 (0'0,70'242] local-lis/les=79/80 n=8 ec=54/40 lis/c=76/54 les/c/f=77/56/0 sis=79) [2] r=0 lpr=79 pi=[54,79)/1 crt=70'242 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:32:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:00 compute-0 ceph-mon[75237]: pgmap v230: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:00 compute-0 ceph-mon[75237]: 8.4 scrub starts
Nov 29 07:33:00 compute-0 ceph-mon[75237]: 8.4 scrub ok
Nov 29 07:33:00 compute-0 ceph-mon[75237]: osdmap e80: 3 total, 3 up, 3 in
Nov 29 07:33:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:00 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 07:33:00 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 07:33:01 compute-0 ceph-mon[75237]: 8.a scrub starts
Nov 29 07:33:01 compute-0 ceph-mon[75237]: 8.a scrub ok
Nov 29 07:33:01 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 07:33:01 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 07:33:01 compute-0 sudo[105537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:02 compute-0 sshd-session[105171]: Connection closed by 192.168.122.30 port 53088
Nov 29 07:33:02 compute-0 sshd-session[105168]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:33:02 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 07:33:02 compute-0 systemd[1]: session-34.scope: Consumed 8.685s CPU time.
Nov 29 07:33:02 compute-0 systemd-logind[782]: Session 34 logged out. Waiting for processes to exit.
Nov 29 07:33:02 compute-0 systemd-logind[782]: Removed session 34.
Nov 29 07:33:02 compute-0 ceph-mon[75237]: pgmap v232: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 3 unknown, 299 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 341 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 07:33:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:33:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 07:33:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:33:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 07:33:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:33:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:33:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 07:33:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.569870949s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 260.698913574s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:03 compute-0 ceph-mon[75237]: 8.1b scrub starts
Nov 29 07:33:03 compute-0 ceph-mon[75237]: 8.1b scrub ok
Nov 29 07:33:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:33:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.569821358s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 260.698913574s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=65/66 n=8 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688666344s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'244 lcod 71'243 mlcod 71'243 active pruub 260.818054199s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=65/66 n=8 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688624382s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'244 lcod 71'243 mlcod 0'0 unknown NOTIFY pruub 260.818054199s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688480377s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'236 lcod 71'235 mlcod 71'235 active pruub 260.818054199s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688453674s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=71'236 lcod 71'235 mlcod 0'0 unknown NOTIFY pruub 260.818054199s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688150406s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=70'236 lcod 70'235 mlcod 70'235 active pruub 260.818084717s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[6.8( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=81 pruub=10.118164062s) [2] r=-1 lpr=81 pi=[51,81)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 260.248107910s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81 pruub=10.688091278s) [2] r=-1 lpr=81 pi=[65,81)/1 crt=70'236 lcod 70'235 mlcod 0'0 unknown NOTIFY pruub 260.818084717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:03 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 81 pg[6.8( v 43'39 (0'0,43'39] local-lis/les=51/52 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=81 pruub=10.118102074s) [2] r=-1 lpr=81 pi=[51,81)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 260.248107910s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:03 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 81 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81) [2] r=0 lpr=81 pi=[65,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:03 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 81 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81) [2] r=0 lpr=81 pi=[65,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:03 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 81 pg[6.8( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=81) [2] r=0 lpr=81 pi=[51,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:03 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 81 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81) [2] r=0 lpr=81 pi=[65,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:03 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 81 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=81) [2] r=0 lpr=81 pi=[65,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 336 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 07:33:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:33:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 07:33:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:33:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 29 07:33:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 29 07:33:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 07:33:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 07:33:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=70'236 lcod 70'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=65/66 n=8 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'244 lcod 71'243 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'236 lcod 71'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=70'236 lcod 70'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=65/66 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'236 lcod 71'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=65/66 n=8 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'244 lcod 71'243 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:04 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 82 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=65/66 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=0 lpr=82 pi=[65,82)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[65,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:04 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 82 pg[6.8( v 43'39 (0'0,43'39] local-lis/les=81/82 n=0 ec=51/26 lis/c=51/51 les/c/f=52/52/0 sis=81) [2] r=0 lpr=81 pi=[51,81)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:04 compute-0 ceph-mon[75237]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 341 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: osdmap e81: 3 total, 3 up, 3 in
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:33:04 compute-0 ceph-mon[75237]: 8.1f scrub starts
Nov 29 07:33:04 compute-0 ceph-mon[75237]: 8.1f scrub ok
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:33:04 compute-0 ceph-mon[75237]: osdmap e82: 3 total, 3 up, 3 in
Nov 29 07:33:05 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 07:33:05 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 07:33:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 07:33:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 07:33:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 07:33:06 compute-0 ceph-mon[75237]: pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 336 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=82/83 n=5 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[65,82)/1 crt=71'238 lcod 71'237 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[65,82)/1 crt=70'236 lcod 70'235 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[65,82)/1 crt=71'236 lcod 71'235 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=82/83 n=8 ec=54/40 lis/c=65/65 les/c/f=66/66/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[65,82)/1 crt=71'244 lcod 71'243 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 341 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:06 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 07:33:06 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 82 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82 pruub=9.080431938s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 249.135147095s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 82 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=11.882314682s) [0] r=-1 lpr=82 pi=[55,82)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 251.937118530s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 83 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82 pruub=9.080333710s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 249.135147095s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 83 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=55/57 n=1 ec=51/26 lis/c=55/55 les/c/f=57/57/0 sis=82 pruub=11.882215500s) [0] r=-1 lpr=82 pi=[55,82)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.937118530s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 82 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82 pruub=9.079937935s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 249.135131836s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:06 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 83 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82 pruub=9.079905510s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.135131836s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:06 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 83 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82) [2] r=0 lpr=83 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[6.9( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=55/55 les/c/f=57/57/0 sis=82) [0] r=0 lpr=83 pi=[55,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:06 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 83 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=82) [2] r=0 lpr=83 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:06 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 83 pg[6.9( v 43'39 (0'0,43'39] local-lis/les=82/83 n=1 ec=51/26 lis/c=55/55 les/c/f=57/57/0 sis=82) [0] r=0 lpr=83 pi=[55,82)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:06 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 07:33:06 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 07:33:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 07:33:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 07:33:07 compute-0 ceph-mon[75237]: 8.9 scrub starts
Nov 29 07:33:07 compute-0 ceph-mon[75237]: 8.9 scrub ok
Nov 29 07:33:07 compute-0 ceph-mon[75237]: osdmap e83: 3 total, 3 up, 3 in
Nov 29 07:33:07 compute-0 ceph-mon[75237]: 8.13 scrub starts
Nov 29 07:33:07 compute-0 ceph-mon[75237]: 8.13 scrub ok
Nov 29 07:33:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=82/83 n=5 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.987133980s) [2] async=[2] r=-1 lpr=84 pi=[65,84)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 268.310546875s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=82/83 n=5 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.987065315s) [2] r=-1 lpr=84 pi=[65,84)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 268.310546875s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=82/83 n=8 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.990560532s) [2] async=[2] r=-1 lpr=84 pi=[65,84)/1 crt=71'244 lcod 71'243 mlcod 71'243 active pruub 268.314300537s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=82/83 n=8 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.990467072s) [2] r=-1 lpr=84 pi=[65,84)/1 crt=71'244 lcod 71'243 mlcod 0'0 unknown NOTIFY pruub 268.314300537s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.990382195s) [2] async=[2] r=-1 lpr=84 pi=[65,84)/1 crt=71'236 lcod 71'235 mlcod 71'235 active pruub 268.314300537s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.990352631s) [2] r=-1 lpr=84 pi=[65,84)/1 crt=71'236 lcod 71'235 mlcod 0'0 unknown NOTIFY pruub 268.314300537s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.989436150s) [2] async=[2] r=-1 lpr=84 pi=[65,84)/1 crt=70'236 lcod 70'235 mlcod 70'235 active pruub 268.314270020s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 84 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=82/83 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84 pruub=14.989370346s) [2] r=-1 lpr=84 pi=[65,84)/1 crt=70'236 lcod 70'235 mlcod 0'0 unknown NOTIFY pruub 268.314270020s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 luod=0'0 crt=71'236 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 luod=0'0 crt=70'236 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=0/0 n=8 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 luod=0'0 crt=71'244 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=70'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=0/0 n=8 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'244 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 luod=0'0 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 84 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:07 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 84 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 84 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=54/56 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=53'234 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 84 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:07 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 84 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] r=0 lpr=84 pi=[54,84)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:07 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 07:33:07 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 07:33:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 07:33:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 07:33:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 07:33:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 07:33:09 compute-0 ceph-mon[75237]: pgmap v238: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 341 B/s wr, 18 op/s; 18 B/s, 2 objects/s recovering
Nov 29 07:33:09 compute-0 ceph-mon[75237]: 7.11 scrub starts
Nov 29 07:33:09 compute-0 ceph-mon[75237]: 7.11 scrub ok
Nov 29 07:33:09 compute-0 ceph-mon[75237]: osdmap e84: 3 total, 3 up, 3 in
Nov 29 07:33:09 compute-0 ceph-mon[75237]: 7.13 scrub starts
Nov 29 07:33:09 compute-0 ceph-mon[75237]: 7.13 scrub ok
Nov 29 07:33:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 85 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=84/85 n=3 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=53'234 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 85 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 85 pg[9.f( v 71'244 (0'0,71'244] local-lis/les=84/85 n=8 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'244 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 85 pg[9.17( v 71'236 (0'0,71'236] local-lis/les=84/85 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=71'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 85 pg[9.7( v 70'236 (0'0,70'236] local-lis/les=84/85 n=4 ec=54/40 lis/c=82/65 les/c/f=83/66/0 sis=84) [2] r=0 lpr=84 pi=[65,84)/1 crt=70'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 85 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[54,84)/1 crt=71'238 lcod 71'237 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 07:33:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 07:33:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 07:33:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 86 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=84/85 n=3 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.892389297s) [2] async=[2] r=-1 lpr=86 pi=[54,86)/1 crt=53'234 lcod 0'0 mlcod 0'0 active pruub 258.937194824s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 86 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=84/85 n=3 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=86 pruub=15.892304420s) [2] r=-1 lpr=86 pi=[54,86)/1 crt=53'234 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 258.937194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 86 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 86 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:10 compute-0 ceph-mon[75237]: pgmap v240: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:10 compute-0 ceph-mon[75237]: 8.1c scrub starts
Nov 29 07:33:10 compute-0 ceph-mon[75237]: 8.1c scrub ok
Nov 29 07:33:10 compute-0 ceph-mon[75237]: osdmap e85: 3 total, 3 up, 3 in
Nov 29 07:33:10 compute-0 ceph-mon[75237]: osdmap e86: 3 total, 3 up, 3 in
Nov 29 07:33:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 07:33:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 07:33:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 07:33:10 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 87 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=87 pruub=14.879429817s) [2] async=[2] r=-1 lpr=87 pi=[54,87)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 258.939361572s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:10 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 87 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 luod=0'0 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:10 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 87 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=87 pruub=14.879338264s) [2] r=-1 lpr=87 pi=[54,87)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 258.939361572s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:10 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 87 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:10 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 87 pg[9.18( v 53'234 (0'0,53'234] local-lis/les=86/87 n=3 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=86) [2] r=0 lpr=86 pi=[54,86)/1 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:10 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 07:33:10 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 07:33:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 07:33:11 compute-0 ceph-mon[75237]: pgmap v243: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:11 compute-0 ceph-mon[75237]: osdmap e87: 3 total, 3 up, 3 in
Nov 29 07:33:11 compute-0 ceph-mon[75237]: 8.1d scrub starts
Nov 29 07:33:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 07:33:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 07:33:11 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 88 pg[9.8( v 71'238 (0'0,71'238] local-lis/les=87/88 n=5 ec=54/40 lis/c=84/54 les/c/f=85/56/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:11 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 29 07:33:11 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 29 07:33:12 compute-0 sshd-session[105598]: Invalid user hamed from 103.234.151.178 port 29784
Nov 29 07:33:12 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 07:33:12 compute-0 sshd-session[105598]: Received disconnect from 103.234.151.178 port 29784:11: Bye Bye [preauth]
Nov 29 07:33:12 compute-0 sshd-session[105598]: Disconnected from invalid user hamed 103.234.151.178 port 29784 [preauth]
Nov 29 07:33:12 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 07:33:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 1 activating, 1 peering, 303 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 7 objects/s recovering
Nov 29 07:33:12 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 07:33:12 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 07:33:13 compute-0 ceph-mon[75237]: 8.1d scrub ok
Nov 29 07:33:13 compute-0 ceph-mon[75237]: osdmap e88: 3 total, 3 up, 3 in
Nov 29 07:33:13 compute-0 ceph-mon[75237]: 8.12 deep-scrub starts
Nov 29 07:33:13 compute-0 ceph-mon[75237]: 8.12 deep-scrub ok
Nov 29 07:33:13 compute-0 ceph-mon[75237]: 8.16 scrub starts
Nov 29 07:33:13 compute-0 ceph-mon[75237]: 8.16 scrub ok
Nov 29 07:33:13 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 29 07:33:13 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 29 07:33:14 compute-0 ceph-mon[75237]: pgmap v246: 305 pgs: 1 activating, 1 peering, 303 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 7 objects/s recovering
Nov 29 07:33:14 compute-0 ceph-mon[75237]: 7.1b scrub starts
Nov 29 07:33:14 compute-0 ceph-mon[75237]: 7.1b scrub ok
Nov 29 07:33:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 29 07:33:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 29 07:33:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 activating, 1 peering, 303 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 6 objects/s recovering
Nov 29 07:33:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:14 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 29 07:33:14 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 7.1a scrub starts
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 7.1a scrub ok
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 8.17 scrub starts
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 8.17 scrub ok
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 7.1f scrub starts
Nov 29 07:33:15 compute-0 ceph-mon[75237]: 7.1f scrub ok
Nov 29 07:33:16 compute-0 ceph-mon[75237]: pgmap v247: 305 pgs: 1 activating, 1 peering, 303 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 6 objects/s recovering
Nov 29 07:33:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 4 objects/s recovering
Nov 29 07:33:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 07:33:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 07:33:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:33:16 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 07:33:16 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 07:33:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 07:33:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:33:17 compute-0 ceph-mon[75237]: 8.10 scrub starts
Nov 29 07:33:17 compute-0 ceph-mon[75237]: 8.10 scrub ok
Nov 29 07:33:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:33:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 07:33:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 07:33:17 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 89 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=59/62 n=0 ec=51/26 lis/c=59/59 les/c/f=62/62/0 sis=89 pruub=8.458479881s) [0] r=-1 lpr=89 pi=[59,89)/1 crt=43'39 lcod 0'0 mlcod 0'0 active pruub 258.811492920s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:17 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 89 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=59/62 n=0 ec=51/26 lis/c=59/59 les/c/f=62/62/0 sis=89 pruub=8.458418846s) [0] r=-1 lpr=89 pi=[59,89)/1 crt=43'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 258.811492920s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:17 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 89 pg[6.a( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=59/59 les/c/f=62/62/0 sis=89) [0] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:17 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 29 07:33:17 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 29 07:33:17 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 07:33:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 07:33:18 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 07:33:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 07:33:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 07:33:18 compute-0 ceph-mon[75237]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 4 objects/s recovering
Nov 29 07:33:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:33:18 compute-0 ceph-mon[75237]: osdmap e89: 3 total, 3 up, 3 in
Nov 29 07:33:18 compute-0 ceph-mon[75237]: 8.19 scrub starts
Nov 29 07:33:18 compute-0 ceph-mon[75237]: 8.19 scrub ok
Nov 29 07:33:18 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 90 pg[6.a( v 43'39 (0'0,43'39] local-lis/les=89/90 n=0 ec=51/26 lis/c=59/59 les/c/f=62/62/0 sis=89) [0] r=0 lpr=89 pi=[59,89)/1 crt=43'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 07:33:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 07:33:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:18 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 29 07:33:18 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 29 07:33:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 07:33:19 compute-0 sshd-session[105600]: Accepted publickey for zuul from 192.168.122.30 port 40250 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:33:19 compute-0 systemd-logind[782]: New session 35 of user zuul.
Nov 29 07:33:19 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 29 07:33:19 compute-0 sshd-session[105600]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:33:19 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Nov 29 07:33:19 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Nov 29 07:33:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 29 07:33:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 29 07:33:20 compute-0 python3.9[105753]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 07:33:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 07:33:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 07:33:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 07:33:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:21 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 07:33:22 compute-0 python3.9[105927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:33:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 07:33:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 07:33:22 compute-0 ceph-mon[75237]: 8.15 scrub starts
Nov 29 07:33:22 compute-0 ceph-mon[75237]: 8.15 scrub ok
Nov 29 07:33:22 compute-0 ceph-mon[75237]: osdmap e90: 3 total, 3 up, 3 in
Nov 29 07:33:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:22 compute-0 ceph-mon[75237]: 8.14 scrub starts
Nov 29 07:33:22 compute-0 ceph-mon[75237]: 8.14 scrub ok
Nov 29 07:33:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 07:33:22 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 91 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=91 pruub=11.852643013s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=43'39 mlcod 43'39 active pruub 280.646087646s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:22 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 91 pg[6.b( v 43'39 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=91 pruub=11.852405548s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 280.646087646s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:22 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 91 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=91) [1] r=0 lpr=91 pi=[61,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 07:33:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 07:33:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:33:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 07:33:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:23 compute-0 sudo[106081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekfzfemilqmaqmmyoleuhgrhkjmzxkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401602.9203012-45-258914458752217/AnsiballZ_command.py'
Nov 29 07:33:23 compute-0 sudo[106081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 07:33:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 07:33:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 07:33:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 92 pg[6.b( v 43'39 lc 0'0 (0'0,43'39] local-lis/les=91/92 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=91) [1] r=0 lpr=91 pi=[61,91)/1 crt=43'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:23 compute-0 ceph-mon[75237]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 7.4 deep-scrub starts
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 7.4 deep-scrub ok
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 8.11 scrub starts
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 8.11 scrub ok
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 8.1e scrub starts
Nov 29 07:33:23 compute-0 ceph-mon[75237]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 9.2 scrub starts
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: osdmap e91: 3 total, 3 up, 3 in
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 8.1e scrub ok
Nov 29 07:33:23 compute-0 ceph-mon[75237]: 9.2 scrub ok
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:33:23 compute-0 ceph-mon[75237]: osdmap e92: 3 total, 3 up, 3 in
Nov 29 07:33:23 compute-0 python3.9[106083]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:23 compute-0 sudo[106081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:23 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 29 07:33:23 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 29 07:33:24 compute-0 ceph-mon[75237]: pgmap v254: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:24 compute-0 sudo[106234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxrampeucxmwslhfesdqyfvmljpsceb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401604.168203-57-71888014992180/AnsiballZ_stat.py'
Nov 29 07:33:24 compute-0 sudo[106234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 07:33:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:33:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 07:33:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:33:24 compute-0 python3.9[106236]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:33:24 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 07:33:24 compute-0 sudo[106234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:24 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 07:33:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 07:33:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:33:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:33:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 07:33:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 07:33:25 compute-0 ceph-mon[75237]: 7.1c scrub starts
Nov 29 07:33:25 compute-0 ceph-mon[75237]: 7.1c scrub ok
Nov 29 07:33:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:33:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:33:25 compute-0 ceph-mon[75237]: 10.15 scrub starts
Nov 29 07:33:25 compute-0 sudo[106388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brsjryjkvjxsmcbzkdwkubttcbtiacyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401605.1926272-68-238430504647665/AnsiballZ_file.py'
Nov 29 07:33:25 compute-0 sudo[106388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:25 compute-0 systemd[76685]: Created slice User Background Tasks Slice.
Nov 29 07:33:25 compute-0 systemd[76685]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 07:33:25 compute-0 systemd[76685]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 07:33:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 93 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93 pruub=13.999346733s) [2] r=-1 lpr=93 pi=[54,93)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 273.136077881s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 93 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93 pruub=13.999280930s) [2] r=-1 lpr=93 pi=[54,93)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 273.136077881s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 93 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93 pruub=13.998509407s) [2] r=-1 lpr=93 pi=[54,93)/1 crt=70'238 lcod 70'237 mlcod 70'237 active pruub 273.136138916s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 93 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93 pruub=13.998302460s) [2] r=-1 lpr=93 pi=[54,93)/1 crt=70'238 lcod 70'237 mlcod 0'0 unknown NOTIFY pruub 273.136138916s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 93 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93) [2] r=0 lpr=93 pi=[54,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:25 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 93 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=93) [2] r=0 lpr=93 pi=[54,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:25 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 93 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=70/71 n=1 ec=51/26 lis/c=70/70 les/c/f=71/71/0 sis=93 pruub=10.489510536s) [1] r=-1 lpr=93 pi=[70,93)/1 crt=43'39 mlcod 43'39 active pruub 282.597717285s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:25 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 93 pg[6.d( v 43'39 (0'0,43'39] local-lis/les=70/71 n=1 ec=51/26 lis/c=70/70 les/c/f=71/71/0 sis=93 pruub=10.489393234s) [1] r=-1 lpr=93 pi=[70,93)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 282.597717285s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:25 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 93 pg[6.d( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=70/70 les/c/f=71/71/0 sis=93) [1] r=0 lpr=93 pi=[70,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:25 compute-0 python3.9[106390]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:25 compute-0 sudo[106388]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:26 compute-0 sudo[106541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyuoiqxavdpcurtvjhtxtpjbzhevezxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401606.259918-77-84328385486446/AnsiballZ_file.py'
Nov 29 07:33:26 compute-0 sudo[106541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:26 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 07:33:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:26 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 07:33:26 compute-0 python3.9[106543]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:26 compute-0 sudo[106541]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 07:33:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 07:33:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:33:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 07:33:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:33:27 compute-0 ceph-mon[75237]: pgmap v256: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:27 compute-0 ceph-mon[75237]: 10.15 scrub ok
Nov 29 07:33:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:33:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:33:27 compute-0 ceph-mon[75237]: osdmap e93: 3 total, 3 up, 3 in
Nov 29 07:33:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 07:33:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 07:33:27 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 94 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:27 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 94 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:27 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 94 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:27 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 94 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[54,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 94 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=0 lpr=94 pi=[54,94)/1 crt=70'238 lcod 70'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 94 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=0 lpr=94 pi=[54,94)/1 crt=70'238 lcod 70'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 94 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=0 lpr=94 pi=[54,94)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 94 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=54/56 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] r=0 lpr=94 pi=[54,94)/1 crt=71'238 lcod 71'237 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:27 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 94 pg[6.d( v 43'39 lc 40'13 (0'0,43'39] local-lis/les=93/94 n=1 ec=51/26 lis/c=70/70 les/c/f=71/71/0 sis=93) [1] r=0 lpr=93 pi=[70,93)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:27 compute-0 python3.9[106693]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:33:27 compute-0 network[106710]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:33:27 compute-0 network[106711]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:33:27 compute-0 network[106712]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:33:28 compute-0 ceph-mon[75237]: 9.4 scrub starts
Nov 29 07:33:28 compute-0 ceph-mon[75237]: pgmap v258: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:28 compute-0 ceph-mon[75237]: 9.4 scrub ok
Nov 29 07:33:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:33:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:33:28 compute-0 ceph-mon[75237]: osdmap e94: 3 total, 3 up, 3 in
Nov 29 07:33:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 07:33:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:33:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:33:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 07:33:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 07:33:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:33:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Nov 29 07:33:28 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Nov 29 07:33:29 compute-0 sudo[106772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:29 compute-0 sudo[106772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:29 compute-0 sudo[106772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:29 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 29 07:33:29 compute-0 sudo[106800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:33:29 compute-0 sudo[106800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:29 compute-0 sudo[106800]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:29 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 07:33:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:29 compute-0 sudo[106829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:29 compute-0 sudo[106829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:29 compute-0 sudo[106829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-0 sudo[106858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:33:30 compute-0 sudo[106858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:30 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 29 07:33:30 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 07:33:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 95 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[54,94)/1 crt=71'238 lcod 71'237 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:30 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 95 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=54/54 les/c/f=56/56/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[54,94)/1 crt=70'238 lcod 70'237 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:33:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:33:30 compute-0 ceph-mon[75237]: osdmap e95: 3 total, 3 up, 3 in
Nov 29 07:33:30 compute-0 sudo[106858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 939aebeb-b0ee-4593-8a8d-d1cff992f37b does not exist
Nov 29 07:33:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 75cd11da-6c19-4879-8cdc-15e9d6dc2f5e does not exist
Nov 29 07:33:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9ce68502-a1d2-4df2-8443-2b37c3ce10b5 does not exist
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:33:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:33:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:30 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 07:33:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 07:33:30 compute-0 sudo[106944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:30 compute-0 sudo[106944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:30 compute-0 sudo[106944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-0 sudo[106972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:33:30 compute-0 sudo[106972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:30 compute-0 sudo[106972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-0 sudo[107000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:30 compute-0 sudo[107000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:30 compute-0 sudo[107000]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:30 compute-0 sudo[107028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:33:30 compute-0 sudo[107028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:31 compute-0 podman[107165]: 2025-11-29 07:33:31.191728484 +0000 UTC m=+0.048379247 container create af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:33:31 compute-0 systemd[1]: Started libpod-conmon-af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622.scope.
Nov 29 07:33:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 07:33:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 07:33:31 compute-0 podman[107165]: 2025-11-29 07:33:31.169720489 +0000 UTC m=+0.026371282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 07:33:31 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 96 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=70'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:31 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 96 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 crt=70'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:31 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 96 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 luod=0'0 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:31 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 96 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:31 compute-0 ceph-mon[75237]: pgmap v261: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 10.1e deep-scrub starts
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 10.1e deep-scrub ok
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 10.e scrub starts
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 7.5 scrub starts
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 10.e scrub ok
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 7.5 scrub ok
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 9.a scrub starts
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:33:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:33:31 compute-0 ceph-mon[75237]: 9.a scrub ok
Nov 29 07:33:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 96 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96 pruub=14.973559380s) [2] async=[2] r=-1 lpr=96 pi=[54,96)/1 crt=71'238 lcod 71'237 mlcod 71'237 active pruub 279.507995605s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 96 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96 pruub=14.973485947s) [2] r=-1 lpr=96 pi=[54,96)/1 crt=71'238 lcod 71'237 mlcod 0'0 unknown NOTIFY pruub 279.507995605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 96 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96 pruub=14.973524094s) [2] async=[2] r=-1 lpr=96 pi=[54,96)/1 crt=70'238 lcod 70'237 mlcod 70'237 active pruub 279.508056641s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:31 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 96 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=94/95 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96 pruub=14.973425865s) [2] r=-1 lpr=96 pi=[54,96)/1 crt=70'238 lcod 70'237 mlcod 0'0 unknown NOTIFY pruub 279.508056641s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:31 compute-0 podman[107165]: 2025-11-29 07:33:31.302683883 +0000 UTC m=+0.159334726 container init af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:33:31 compute-0 podman[107165]: 2025-11-29 07:33:31.315012059 +0000 UTC m=+0.171662852 container start af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:31 compute-0 podman[107165]: 2025-11-29 07:33:31.319650432 +0000 UTC m=+0.176301265 container attach af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:33:31 compute-0 busy_agnesi[107184]: 167 167
Nov 29 07:33:31 compute-0 systemd[1]: libpod-af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622.scope: Deactivated successfully.
Nov 29 07:33:31 compute-0 conmon[107184]: conmon af63bba8f2e5c7252687 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622.scope/container/memory.events
Nov 29 07:33:31 compute-0 podman[107193]: 2025-11-29 07:33:31.394439649 +0000 UTC m=+0.044003490 container died af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc471fc2a5a751422f891924660799e7608b9295d0b2bd0972f6a2d4497682a-merged.mount: Deactivated successfully.
Nov 29 07:33:31 compute-0 podman[107193]: 2025-11-29 07:33:31.449377213 +0000 UTC m=+0.098940974 container remove af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:33:31 compute-0 systemd[1]: libpod-conmon-af63bba8f2e5c7252687d5957a518e00b93a3ed6a6a83e1f4ff1e24e5b644622.scope: Deactivated successfully.
Nov 29 07:33:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 29 07:33:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 29 07:33:31 compute-0 podman[107285]: 2025-11-29 07:33:31.680545408 +0000 UTC m=+0.066370104 container create 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:33:31 compute-0 systemd[1]: Started libpod-conmon-975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db.scope.
Nov 29 07:33:31 compute-0 podman[107285]: 2025-11-29 07:33:31.64799037 +0000 UTC m=+0.033815156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:31 compute-0 python3.9[107279]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:31 compute-0 podman[107285]: 2025-11-29 07:33:31.779150991 +0000 UTC m=+0.164975717 container init 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:33:31 compute-0 podman[107285]: 2025-11-29 07:33:31.786162293 +0000 UTC m=+0.171986999 container start 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:33:31 compute-0 podman[107285]: 2025-11-29 07:33:31.790027164 +0000 UTC m=+0.175851870 container attach 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:33:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 07:33:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 07:33:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 07:33:32 compute-0 ceph-mon[75237]: pgmap v262: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 07:33:32 compute-0 ceph-mon[75237]: osdmap e96: 3 total, 3 up, 3 in
Nov 29 07:33:32 compute-0 ceph-mon[75237]: 9.12 scrub starts
Nov 29 07:33:32 compute-0 ceph-mon[75237]: 9.12 scrub ok
Nov 29 07:33:32 compute-0 ceph-mon[75237]: osdmap e97: 3 total, 3 up, 3 in
Nov 29 07:33:32 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 97 pg[9.c( v 71'238 (0'0,71'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:32 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 97 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=94/54 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[54,96)/1 crt=70'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:32 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 07:33:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:33:32 compute-0 python3.9[107456]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:33:32 compute-0 cool_khayyam[107301]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:33:32 compute-0 cool_khayyam[107301]: --> relative data size: 1.0
Nov 29 07:33:32 compute-0 cool_khayyam[107301]: --> All data devices are unavailable
Nov 29 07:33:32 compute-0 systemd[1]: libpod-975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db.scope: Deactivated successfully.
Nov 29 07:33:32 compute-0 systemd[1]: libpod-975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db.scope: Consumed 1.130s CPU time.
Nov 29 07:33:32 compute-0 podman[107285]: 2025-11-29 07:33:32.994676378 +0000 UTC m=+1.380501084 container died 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:33:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:34 compute-0 ceph-mon[75237]: 9.14 scrub starts
Nov 29 07:33:34 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 07:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8562d3df62466e7f7b91b37010b9ea2f3514cb0440550b98d3fc855270c8635-merged.mount: Deactivated successfully.
Nov 29 07:33:34 compute-0 podman[107285]: 2025-11-29 07:33:34.748300639 +0000 UTC m=+3.134125345 container remove 975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:33:34 compute-0 systemd[1]: libpod-conmon-975431b183e40b7b10e44c181d1f28a96871f3fbe4cbe3804d7b7695cab1f5db.scope: Deactivated successfully.
Nov 29 07:33:34 compute-0 sudo[107028]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:34 compute-0 sudo[107647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:34 compute-0 sudo[107647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:34 compute-0 sudo[107647]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:34 compute-0 sudo[107672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:33:34 compute-0 sudo[107672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:34 compute-0 sudo[107672]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:34 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 29 07:33:34 compute-0 python3.9[107645]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:33:34 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 29 07:33:34 compute-0 sudo[107697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:34 compute-0 sudo[107697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:34 compute-0 sudo[107697]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:35 compute-0 sudo[107728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:33:35 compute-0 sudo[107728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.324338257 +0000 UTC m=+0.043476505 container create f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:33:35 compute-0 systemd[1]: Started libpod-conmon-f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2.scope.
Nov 29 07:33:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.306212004 +0000 UTC m=+0.025350282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.411015256 +0000 UTC m=+0.130153534 container init f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.420756167 +0000 UTC m=+0.139894425 container start f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:35 compute-0 affectionate_wilbur[107837]: 167 167
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.425865475 +0000 UTC m=+0.145003743 container attach f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:33:35 compute-0 systemd[1]: libpod-f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2.scope: Deactivated successfully.
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.426361708 +0000 UTC m=+0.145499966 container died f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-81cc721d2c1a69bdee783f36f5ac63512e685dba4319fd7bb9525b0d09d0cc23-merged.mount: Deactivated successfully.
Nov 29 07:33:35 compute-0 podman[107817]: 2025-11-29 07:33:35.470212323 +0000 UTC m=+0.189350581 container remove f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:33:35 compute-0 systemd[1]: libpod-conmon-f2fb775f8b7e26ea23863df26e6c2b40e72bc6b08b2e839f8a56c1a8273653b2.scope: Deactivated successfully.
Nov 29 07:33:35 compute-0 podman[107931]: 2025-11-29 07:33:35.616051238 +0000 UTC m=+0.039407667 container create 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:33:35 compute-0 systemd[1]: Started libpod-conmon-999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8.scope.
Nov 29 07:33:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc1b853b7d6ca148fccd2fd5507535a66da60977d57476b46cce97b056c4cdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc1b853b7d6ca148fccd2fd5507535a66da60977d57476b46cce97b056c4cdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc1b853b7d6ca148fccd2fd5507535a66da60977d57476b46cce97b056c4cdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc1b853b7d6ca148fccd2fd5507535a66da60977d57476b46cce97b056c4cdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:35 compute-0 podman[107931]: 2025-11-29 07:33:35.681835485 +0000 UTC m=+0.105191914 container init 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:33:35 compute-0 ceph-mon[75237]: pgmap v265: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:33:35 compute-0 ceph-mon[75237]: pgmap v266: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:35 compute-0 ceph-mon[75237]: 9.14 scrub ok
Nov 29 07:33:35 compute-0 podman[107931]: 2025-11-29 07:33:35.691991748 +0000 UTC m=+0.115348177 container start 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:35 compute-0 sudo[108001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkptnfdiivehyqvqydwhkvubupmhzfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401615.402138-125-6267260575363/AnsiballZ_setup.py'
Nov 29 07:33:35 compute-0 podman[107931]: 2025-11-29 07:33:35.599271924 +0000 UTC m=+0.022628373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:35 compute-0 podman[107931]: 2025-11-29 07:33:35.696658962 +0000 UTC m=+0.120015401 container attach 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:33:35 compute-0 sudo[108001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:35 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 07:33:35 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 07:33:36 compute-0 python3.9[108005]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:33:36 compute-0 sudo[108001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:36 compute-0 admiring_robinson[107983]: {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     "0": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "devices": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "/dev/loop3"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             ],
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_name": "ceph_lv0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_size": "21470642176",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "name": "ceph_lv0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "tags": {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_name": "ceph",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.crush_device_class": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.encrypted": "0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_id": "0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.vdo": "0"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             },
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "vg_name": "ceph_vg0"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         }
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     ],
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     "1": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "devices": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "/dev/loop4"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             ],
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_name": "ceph_lv1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_size": "21470642176",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "name": "ceph_lv1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "tags": {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_name": "ceph",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.crush_device_class": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.encrypted": "0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_id": "1",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.vdo": "0"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             },
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "vg_name": "ceph_vg1"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         }
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     ],
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     "2": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "devices": [
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "/dev/loop5"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             ],
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_name": "ceph_lv2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_size": "21470642176",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "name": "ceph_lv2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "tags": {
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.cluster_name": "ceph",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.crush_device_class": "",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.encrypted": "0",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osd_id": "2",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:                 "ceph.vdo": "0"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             },
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "type": "block",
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:             "vg_name": "ceph_vg2"
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:         }
Nov 29 07:33:36 compute-0 admiring_robinson[107983]:     ]
Nov 29 07:33:36 compute-0 admiring_robinson[107983]: }
Nov 29 07:33:36 compute-0 systemd[1]: libpod-999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8.scope: Deactivated successfully.
Nov 29 07:33:36 compute-0 podman[107931]: 2025-11-29 07:33:36.494166376 +0000 UTC m=+0.917522835 container died 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bc1b853b7d6ca148fccd2fd5507535a66da60977d57476b46cce97b056c4cdf-merged.mount: Deactivated successfully.
Nov 29 07:33:36 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 07:33:36 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 07:33:36 compute-0 podman[107931]: 2025-11-29 07:33:36.56504724 +0000 UTC m=+0.988403669 container remove 999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:33:36 compute-0 systemd[1]: libpod-conmon-999a4d27baf14e3f39feec9af4c15ed1940822d7586f1c604f94eb004e54b5b8.scope: Deactivated successfully.
Nov 29 07:33:36 compute-0 sudo[107728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 07:33:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 07:33:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:33:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 07:33:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:33:36 compute-0 sudo[108052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:36 compute-0 sudo[108052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:36 compute-0 sudo[108052]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 07:33:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:33:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:33:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 07:33:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 07:33:36 compute-0 ceph-mon[75237]: 10.16 scrub starts
Nov 29 07:33:36 compute-0 ceph-mon[75237]: 10.16 scrub ok
Nov 29 07:33:36 compute-0 ceph-mon[75237]: 11.3 scrub starts
Nov 29 07:33:36 compute-0 ceph-mon[75237]: 11.3 scrub ok
Nov 29 07:33:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:33:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:33:36 compute-0 sudo[108081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:33:36 compute-0 sudo[108081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:36 compute-0 sudo[108081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:36 compute-0 sudo[108126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:36 compute-0 sudo[108126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:36 compute-0 sudo[108126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:36 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 07:33:36 compute-0 sudo[108176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpzadpzapagnkhgaenvkltdcnbtzprj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401615.402138-125-6267260575363/AnsiballZ_dnf.py'
Nov 29 07:33:36 compute-0 sudo[108176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:36 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 07:33:36 compute-0 sudo[108179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:33:36 compute-0 sudo[108179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:36 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 07:33:36 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 07:33:37 compute-0 python3.9[108180]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.249837434 +0000 UTC m=+0.063357588 container create da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:37 compute-0 systemd[1]: Started libpod-conmon-da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17.scope.
Nov 29 07:33:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.227451879 +0000 UTC m=+0.040972083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.329187672 +0000 UTC m=+0.142707816 container init da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.335577086 +0000 UTC m=+0.149097240 container start da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.338504621 +0000 UTC m=+0.152024805 container attach da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:33:37 compute-0 confident_nightingale[108263]: 167 167
Nov 29 07:33:37 compute-0 systemd[1]: libpod-da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17.scope: Deactivated successfully.
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.340440676 +0000 UTC m=+0.153960830 container died da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0deec0ea6e856415516fbffaf6495563ac9ac66cd9c3cf15d40d1865a50ee781-merged.mount: Deactivated successfully.
Nov 29 07:33:37 compute-0 podman[108247]: 2025-11-29 07:33:37.379550394 +0000 UTC m=+0.193070548 container remove da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:33:37 compute-0 systemd[1]: libpod-conmon-da38286bb95c0941bf047ebe2ffe90294b44b479a1be577caf6c6a89a9465f17.scope: Deactivated successfully.
Nov 29 07:33:37 compute-0 podman[108292]: 2025-11-29 07:33:37.546889279 +0000 UTC m=+0.049021954 container create 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:33:37 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 29 07:33:37 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 29 07:33:37 compute-0 systemd[1]: Started libpod-conmon-6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03.scope.
Nov 29 07:33:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503a28c0a6ae41646d00ee3af4cbb9f6632e3df63d178a29473068eba6952549/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503a28c0a6ae41646d00ee3af4cbb9f6632e3df63d178a29473068eba6952549/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503a28c0a6ae41646d00ee3af4cbb9f6632e3df63d178a29473068eba6952549/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503a28c0a6ae41646d00ee3af4cbb9f6632e3df63d178a29473068eba6952549/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:37 compute-0 podman[108292]: 2025-11-29 07:33:37.527797988 +0000 UTC m=+0.029930683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:37 compute-0 podman[108292]: 2025-11-29 07:33:37.637202422 +0000 UTC m=+0.139335117 container init 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:33:37 compute-0 podman[108292]: 2025-11-29 07:33:37.648439576 +0000 UTC m=+0.150572241 container start 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:33:37 compute-0 podman[108292]: 2025-11-29 07:33:37.652204805 +0000 UTC m=+0.154337550 container attach 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 5.1d scrub starts
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 5.1d scrub ok
Nov 29 07:33:37 compute-0 ceph-mon[75237]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 07:33:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:33:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:33:37 compute-0 ceph-mon[75237]: osdmap e98: 3 total, 3 up, 3 in
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 10.17 scrub starts
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 10.17 scrub ok
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 11.2 scrub starts
Nov 29 07:33:37 compute-0 ceph-mon[75237]: 11.2 scrub ok
Nov 29 07:33:37 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 29 07:33:37 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 29 07:33:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 98 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=98 pruub=11.986457825s) [2] r=-1 lpr=98 pi=[61,98)/1 crt=43'39 mlcod 43'39 active pruub 296.643249512s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:38 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 98 pg[6.f( v 43'39 (0'0,43'39] local-lis/les=61/63 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=98 pruub=11.986065865s) [2] r=-1 lpr=98 pi=[61,98)/1 crt=43'39 mlcod 0'0 unknown NOTIFY pruub 296.643249512s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 98 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=98) [2] r=0 lpr=98 pi=[61,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:33:38
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 2/10 changes
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Executing plan auto_2025-11-29_07:33:38
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd rm-pg-upmap-items 9.10
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [balancer INFO root] ceph osd rm-pg-upmap-items 9.1a
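The eight balancer lines above show the mgr's balancer module building and executing an upmap plan: mode upmap, max misplaced 0.05, 2 of 10 candidate changes prepared, then `ceph osd rm-pg-upmap-items` issued for pgs 9.10 and 9.1a. A minimal sketch of replaying those same two commands from Python via the ceph CLI follows (assumes a reachable cluster and admin keyring; the pg ids are taken verbatim from the log above, the helper name is illustrative):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command; raises on non-zero exit."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # The same operations the balancer logged above: drop the explicit
    # upmap entries so these PGs revert to plain CRUSH placement.
    for pgid in ("9.10", "9.1a"):
        ceph("osd", "rm-pg-upmap-items", pgid)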
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.10"} v 0) v1
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.10"}]: dispatch
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.1a"} v 0) v1
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.1a"}]: dispatch
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]: {
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_id": 2,
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "type": "bluestore"
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     },
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_id": 0,
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "type": "bluestore"
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     },
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_id": 1,
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:         "type": "bluestore"
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]:     }
Nov 29 07:33:38 compute-0 exciting_dhawan[108312]: }
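The exciting_dhawan container above printed a ceph-volume-style JSON map keyed by OSD UUID, one entry per local bluestore OSD. A minimal sketch for pulling the osd_id → device mapping out of that payload, assuming the exact field names shown in the log (payload abbreviated to one of the three entries):

    import json

    # One entry copied verbatim from the container output above.
    payload = """{
        "2406c235-b877-477d-8a53-b5b71e6811ae": {
            "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
            "type": "bluestore"
        }
    }"""

    osds = json.loads(payload)
    for uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        # e.g. "osd.2: /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)"
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")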
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:38 compute-0 systemd[1]: libpod-6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03.scope: Deactivated successfully.
Nov 29 07:33:38 compute-0 systemd[1]: libpod-6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03.scope: Consumed 1.048s CPU time.
Nov 29 07:33:38 compute-0 podman[108292]: 2025-11-29 07:33:38.689482672 +0000 UTC m=+1.191615337 container died 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-503a28c0a6ae41646d00ee3af4cbb9f6632e3df63d178a29473068eba6952549-merged.mount: Deactivated successfully.
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.10"}]': finished
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.1a"}]': finished
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 99 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=72/73 n=4 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=8.228591919s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=63'236 mlcod 0'0 active pruub 274.452392578s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 99 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=72/73 n=4 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=8.228404045s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=63'236 mlcod 0'0 unknown NOTIFY pruub 274.452392578s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:38 compute-0 ceph-mon[75237]: 5.19 scrub starts
Nov 29 07:33:38 compute-0 ceph-mon[75237]: 5.19 scrub ok
Nov 29 07:33:38 compute-0 ceph-mon[75237]: 10.d scrub starts
Nov 29 07:33:38 compute-0 ceph-mon[75237]: 10.d scrub ok
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 99 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=8.227862358s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=53'234 mlcod 0'0 active pruub 274.452270508s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.10"}]: dispatch
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 99 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99 pruub=8.227801323s) [1] r=-1 lpr=99 pi=[72,99)/1 crt=53'234 mlcod 0'0 unknown NOTIFY pruub 274.452270508s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.1a"}]: dispatch
Nov 29 07:33:38 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:33:38 compute-0 podman[108292]: 2025-11-29 07:33:38.748817463 +0000 UTC m=+1.250950128 container remove 6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:33:38 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 99 pg[6.f( v 43'39 lc 40'1 (0'0,43'39] local-lis/les=98/99 n=1 ec=51/26 lis/c=61/61 les/c/f=63/63/0 sis=98) [2] r=0 lpr=98 pi=[61,98)/1 crt=43'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 99 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:38 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 99 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:38 compute-0 systemd[1]: libpod-conmon-6add2c423ef2a9a2734668ae2ec6755cc7f5ae77f572d6fa00f9c7450056bc03.scope: Deactivated successfully.
Nov 29 07:33:38 compute-0 sudo[108179]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:33:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1d7e6c00-23ec-4ab9-b1f2-6467a401f2ea does not exist
Nov 29 07:33:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 662c75c1-9b69-4077-8a65-ce170f7a9953 does not exist
Nov 29 07:33:38 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 07:33:38 compute-0 sudo[108386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:38 compute-0 sudo[108386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:38 compute-0 sudo[108386]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:38 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 07:33:38 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 07:33:38 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 07:33:38 compute-0 sudo[108414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:33:38 compute-0 sudo[108414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:38 compute-0 sudo[108414]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 07:33:39 compute-0 ceph-mon[75237]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Nov 29 07:33:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.10"}]': finished
Nov 29 07:33:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.1a"}]': finished
Nov 29 07:33:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:33:39 compute-0 ceph-mon[75237]: osdmap e99: 3 total, 3 up, 3 in
Nov 29 07:33:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:33:39 compute-0 ceph-mon[75237]: 11.d scrub starts
Nov 29 07:33:39 compute-0 ceph-mon[75237]: 11.d scrub ok
Nov 29 07:33:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 07:33:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 07:33:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 100 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 100 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:39 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 100 pg[9.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=-1 lpr=100 pi=[72,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 100 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=72/73 n=4 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=0 lpr=100 pi=[72,100)/1 crt=63'236 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 100 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=72/73 n=4 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=0 lpr=100 pi=[72,100)/1 crt=63'236 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 100 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=0 lpr=100 pi=[72,100)/1 crt=53'234 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:39 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 100 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] r=0 lpr=100 pi=[72,100)/1 crt=53'234 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 29 07:33:39 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 29 07:33:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:33:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 07:33:40 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 07:33:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 07:33:40 compute-0 ceph-mon[75237]: 11.19 scrub starts
Nov 29 07:33:40 compute-0 ceph-mon[75237]: 11.19 scrub ok
Nov 29 07:33:40 compute-0 ceph-mon[75237]: osdmap e100: 3 total, 3 up, 3 in
Nov 29 07:33:40 compute-0 ceph-mon[75237]: 11.14 scrub starts
Nov 29 07:33:40 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 07:33:40 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 07:33:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 07:33:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 101 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=100/101 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] async=[1] r=0 lpr=100 pi=[72,100)/1 crt=53'234 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 101 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=100/101 n=4 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=100) [1]/[2] async=[1] r=0 lpr=100 pi=[72,100)/1 crt=63'236 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 29 07:33:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 07:33:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 07:33:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 07:33:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 29 07:33:41 compute-0 ceph-mon[75237]: 11.14 scrub ok
Nov 29 07:33:41 compute-0 ceph-mon[75237]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:33:41 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 07:33:41 compute-0 ceph-mon[75237]: osdmap e101: 3 total, 3 up, 3 in
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 102 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=100/101 n=4 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=15.409201622s) [1] async=[1] r=-1 lpr=102 pi=[72,102)/1 crt=63'236 mlcod 63'236 active pruub 284.685913086s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 102 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=100/101 n=3 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=15.405719757s) [1] async=[1] r=-1 lpr=102 pi=[72,102)/1 crt=53'234 mlcod 53'234 active pruub 284.682495117s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 102 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=100/101 n=3 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=15.405545235s) [1] r=-1 lpr=102 pi=[72,102)/1 crt=53'234 mlcod 0'0 unknown NOTIFY pruub 284.682495117s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:41 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 102 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=100/101 n=4 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102 pruub=15.408885002s) [1] r=-1 lpr=102 pi=[72,102)/1 crt=63'236 mlcod 0'0 unknown NOTIFY pruub 284.685913086s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 102 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 luod=0'0 crt=63'236 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 102 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 102 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=63'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:41 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 102 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:42 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 07:33:42 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 178 B/s, 1 objects/s recovering
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:33:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:33:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 07:33:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 07:33:42 compute-0 ceph-mon[75237]: 11.f scrub starts
Nov 29 07:33:42 compute-0 ceph-mon[75237]: osdmap e102: 3 total, 3 up, 3 in
Nov 29 07:33:42 compute-0 ceph-mon[75237]: 11.f scrub ok
Nov 29 07:33:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 07:33:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 103 pg[9.1a( v 63'236 (0'0,63'236] local-lis/les=102/103 n=4 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=63'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:42 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 103 pg[9.10( v 53'234 (0'0,53'234] local-lis/les=102/103 n=3 ec=54/40 lis/c=100/72 les/c/f=101/73/0 sis=102) [1] r=0 lpr=102 pi=[72,102)/1 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:43 compute-0 ceph-mon[75237]: 5.18 scrub starts
Nov 29 07:33:43 compute-0 ceph-mon[75237]: 5.18 scrub ok
Nov 29 07:33:43 compute-0 ceph-mon[75237]: pgmap v275: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 178 B/s, 1 objects/s recovering
Nov 29 07:33:43 compute-0 ceph-mon[75237]: osdmap e103: 3 total, 3 up, 3 in
Nov 29 07:33:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 146 B/s, 1 objects/s recovering
Nov 29 07:33:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 07:33:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 07:33:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:45 compute-0 ceph-mon[75237]: pgmap v277: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 146 B/s, 1 objects/s recovering
Nov 29 07:33:45 compute-0 ceph-mon[75237]: 11.9 scrub starts
Nov 29 07:33:45 compute-0 ceph-mon[75237]: 11.9 scrub ok
Nov 29 07:33:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 118 B/s, 1 objects/s recovering
Nov 29 07:33:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 07:33:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 07:33:46 compute-0 sshd-session[108464]: Invalid user ali from 20.185.243.158 port 50206
Nov 29 07:33:46 compute-0 sshd-session[108464]: Received disconnect from 20.185.243.158 port 50206:11: Bye Bye [preauth]
Nov 29 07:33:46 compute-0 sshd-session[108464]: Disconnected from invalid user ali 20.185.243.158 port 50206 [preauth]
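The three sshd-session lines above record a failed preauth probe for the invalid user "ali" from 20.185.243.158 (a second probe for "runner" appears later in this window). A minimal sketch for tallying such attempts per source address from a journal dump; the regex matches the "Invalid user ... from ... port ..." form seen above, and the file path is illustrative only:

    import re
    from collections import Counter

    PAT = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    hits = Counter()
    with open("/var/log/messages") as fh:   # illustrative path
        for line in fh:
            m = PAT.search(line)
            if m:
                hits[m.group(2)] += 1       # count by source address

    for addr, count in hits.most_common():
        print(addr, count)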
Nov 29 07:33:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 29 07:33:46 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 29 07:33:46 compute-0 ceph-mon[75237]: 11.4 scrub starts
Nov 29 07:33:46 compute-0 ceph-mon[75237]: 11.4 scrub ok
Nov 29 07:33:47 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 07:33:47 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 07:33:47 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 07:33:47 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 07:33:48 compute-0 ceph-mon[75237]: pgmap v278: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 118 B/s, 1 objects/s recovering
Nov 29 07:33:48 compute-0 ceph-mon[75237]: 11.8 deep-scrub starts
Nov 29 07:33:48 compute-0 ceph-mon[75237]: 11.8 deep-scrub ok
Nov 29 07:33:48 compute-0 ceph-mon[75237]: 11.e scrub starts
Nov 29 07:33:48 compute-0 ceph-mon[75237]: 11.e scrub ok
Nov 29 07:33:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 1 objects/s recovering
Nov 29 07:33:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 07:33:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 07:33:48 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 07:33:48 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 07:33:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 07:33:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 07:33:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 07:33:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 07:33:49 compute-0 ceph-mon[75237]: 11.18 scrub starts
Nov 29 07:33:49 compute-0 ceph-mon[75237]: 11.18 scrub ok
Nov 29 07:33:49 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 07:33:49 compute-0 ceph-mon[75237]: 11.1 scrub starts
Nov 29 07:33:49 compute-0 ceph-mon[75237]: 11.1 scrub ok
Nov 29 07:33:49 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 29 07:33:49 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 29 07:33:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:50 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 07:33:50 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 07:33:50 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Nov 29 07:33:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 07:33:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 07:33:50 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
Nov 29 07:33:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 07:33:50 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 07:33:51 compute-0 ceph-mon[75237]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 1 objects/s recovering
Nov 29 07:33:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 07:33:51 compute-0 ceph-mon[75237]: osdmap e104: 3 total, 3 up, 3 in
Nov 29 07:33:51 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 07:33:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 07:33:51 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 07:33:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 07:33:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 07:33:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.1c deep-scrub starts
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.1c deep-scrub ok
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 5.11 scrub starts
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 5.11 scrub ok
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.10 deep-scrub starts
Nov 29 07:33:52 compute-0 ceph-mon[75237]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.10 deep-scrub ok
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.b scrub starts
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 11.b scrub ok
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 5.13 scrub starts
Nov 29 07:33:52 compute-0 ceph-mon[75237]: 5.13 scrub ok
Nov 29 07:33:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 07:33:52 compute-0 ceph-mon[75237]: osdmap e105: 3 total, 3 up, 3 in
Nov 29 07:33:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 07:33:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 07:33:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 07:33:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 07:33:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 07:33:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 07:33:53 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 106 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=106 pruub=10.399278641s) [2] r=-1 lpr=106 pi=[66,106)/1 crt=71'237 lcod 71'236 mlcod 71'236 active pruub 309.655426025s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:53 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 106 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=106 pruub=10.399033546s) [2] r=-1 lpr=106 pi=[66,106)/1 crt=71'237 lcod 71'236 mlcod 0'0 unknown NOTIFY pruub 309.655426025s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 07:33:53 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 106 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=106) [2] r=0 lpr=106 pi=[66,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 29 07:33:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 29 07:33:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 07:33:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 07:33:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 07:33:54 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 107 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=107) [2]/[0] r=0 lpr=107 pi=[66,107)/1 crt=71'237 lcod 71'236 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:54 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 107 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=107) [2]/[0] r=0 lpr=107 pi=[66,107)/1 crt=71'237 lcod 71'236 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:54 compute-0 ceph-mon[75237]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 07:33:54 compute-0 ceph-mon[75237]: osdmap e106: 3 total, 3 up, 3 in
Nov 29 07:33:54 compute-0 ceph-mon[75237]: 5.12 scrub starts
Nov 29 07:33:54 compute-0 ceph-mon[75237]: 5.12 scrub ok
Nov 29 07:33:54 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 107 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:54 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 107 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[66,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:54 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 07:33:54 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 07:33:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 07:33:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 07:33:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:55 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 07:33:55 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 07:33:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
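Each pg_autoscaler pair above is numerically consistent with pg_target = usage_ratio × bias × (target PGs per OSD × OSD count), using Ceph's default mon_target_pg_per_osd of 100 and the 3 OSDs reported in the osdmap lines, before quantization toward a power of two. A worked check against the '.mgr' and 'cephfs.cephfs.meta' lines above; treat this as a reconstruction of the logged arithmetic, not the module's exact code:

    # Reconstruction of the pg_autoscaler targets logged above.
    MON_TARGET_PG_PER_OSD = 100   # Ceph default
    NUM_OSDS = 3                  # "3 total, 3 up, 3 in" in the osdmap lines

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS

    # '.mgr': using 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337
    # 'cephfs.cephfs.meta': using 5.087256625643029e-07, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635

Both results reproduce the "pg target" values in the log to full precision; the "quantized to" figures then reflect rounding to a power of two with the pool's floor applied.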
Nov 29 07:33:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 07:33:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 07:33:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 07:33:56 compute-0 ceph-mon[75237]: osdmap e107: 3 total, 3 up, 3 in
Nov 29 07:33:56 compute-0 ceph-mon[75237]: 5.16 scrub starts
Nov 29 07:33:56 compute-0 ceph-mon[75237]: 5.16 scrub ok
Nov 29 07:33:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 07:33:56 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Nov 29 07:33:57 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 108 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=107/108 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[66,107)/1 crt=71'237 lcod 71'236 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:33:57 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Nov 29 07:33:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts
Nov 29 07:33:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok
Nov 29 07:33:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 07:33:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 07:33:57 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 07:33:57 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 07:33:58 compute-0 ceph-mon[75237]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:58 compute-0 ceph-mon[75237]: 11.1b scrub starts
Nov 29 07:33:58 compute-0 ceph-mon[75237]: 11.1b scrub ok
Nov 29 07:33:58 compute-0 ceph-mon[75237]: pgmap v287: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 07:33:58 compute-0 ceph-mon[75237]: osdmap e108: 3 total, 3 up, 3 in
Nov 29 07:33:58 compute-0 ceph-mon[75237]: 11.11 deep-scrub starts
Nov 29 07:33:58 compute-0 ceph-mon[75237]: 11.11 deep-scrub ok
Nov 29 07:33:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 07:33:58 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 109 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=107/108 n=4 ec=54/40 lis/c=107/66 les/c/f=108/67/0 sis=109 pruub=14.910025597s) [2] async=[2] r=-1 lpr=109 pi=[66,109)/1 crt=71'237 lcod 71'236 mlcod 71'236 active pruub 319.318206787s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:58 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 109 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=107/108 n=4 ec=54/40 lis/c=107/66 les/c/f=108/67/0 sis=109 pruub=14.909797668s) [2] r=-1 lpr=109 pi=[66,109)/1 crt=71'237 lcod 71'236 mlcod 0'0 unknown NOTIFY pruub 319.318206787s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:33:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 109 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=0/0 n=4 ec=54/40 lis/c=107/66 les/c/f=108/67/0 sis=109) [2] r=0 lpr=109 pi=[66,109)/1 luod=0'0 crt=71'237 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:33:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 109 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=0/0 n=4 ec=54/40 lis/c=107/66 les/c/f=108/67/0 sis=109) [2] r=0 lpr=109 pi=[66,109)/1 crt=71'237 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:33:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 07:33:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 07:33:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 07:33:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 07:33:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 07:33:59 compute-0 ceph-mon[75237]: 5.1 deep-scrub starts
Nov 29 07:33:59 compute-0 ceph-mon[75237]: 5.1 deep-scrub ok
Nov 29 07:33:59 compute-0 ceph-mon[75237]: 11.1e scrub starts
Nov 29 07:33:59 compute-0 ceph-mon[75237]: 11.1e scrub ok
Nov 29 07:33:59 compute-0 ceph-mon[75237]: osdmap e109: 3 total, 3 up, 3 in
Nov 29 07:34:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:00 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 110 pg[9.13( v 71'237 (0'0,71'237] local-lis/les=109/110 n=4 ec=54/40 lis/c=107/66 les/c/f=108/67/0 sis=109) [2] r=0 lpr=109 pi=[66,109)/1 crt=71'237 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:00 compute-0 ceph-mon[75237]: pgmap v290: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:00 compute-0 ceph-mon[75237]: 11.17 scrub starts
Nov 29 07:34:00 compute-0 ceph-mon[75237]: osdmap e110: 3 total, 3 up, 3 in
Nov 29 07:34:00 compute-0 ceph-mon[75237]: 11.17 scrub ok
Nov 29 07:34:01 compute-0 sshd-session[108513]: Invalid user runner from 103.236.140.19 port 55220
Nov 29 07:34:01 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 07:34:01 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 07:34:01 compute-0 sshd-session[108513]: Received disconnect from 103.236.140.19 port 55220:11: Bye Bye [preauth]
Nov 29 07:34:01 compute-0 sshd-session[108513]: Disconnected from invalid user runner 103.236.140.19 port 55220 [preauth]
Nov 29 07:34:02 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 07:34:02 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 07:34:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 07:34:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 07:34:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:34:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 07:34:02 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 07:34:03 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 29 07:34:03 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 07:34:03 compute-0 ceph-mon[75237]: pgmap v292: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:03 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 29 07:34:03 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 29 07:34:03 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 29 07:34:04 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 07:34:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 07:34:04 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 07:34:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 29 07:34:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 07:34:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:34:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Nov 29 07:34:04 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.1a scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.1a scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 5.1a scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 5.1a scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:34:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.6 scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 5.9 scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.6 scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 5.9 scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.1f scrub starts
Nov 29 07:34:04 compute-0 ceph-mon[75237]: 11.1f scrub ok
Nov 29 07:34:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 07:34:04 compute-0 ceph-mon[75237]: osdmap e111: 3 total, 3 up, 3 in
Nov 29 07:34:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:34:05 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Nov 29 07:34:05 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Nov 29 07:34:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 07:34:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:34:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 07:34:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 07:34:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:05 compute-0 sshd-session[108515]: Invalid user solana from 80.94.92.182 port 46888
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 5.c scrub starts
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 5.c scrub ok
Nov 29 07:34:05 compute-0 ceph-mon[75237]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 6.3 deep-scrub starts
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 6.3 deep-scrub ok
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 5.f deep-scrub starts
Nov 29 07:34:05 compute-0 ceph-mon[75237]: 5.f deep-scrub ok
Nov 29 07:34:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:34:05 compute-0 ceph-mon[75237]: osdmap e112: 3 total, 3 up, 3 in
Nov 29 07:34:06 compute-0 sshd-session[108515]: Connection closed by invalid user solana 80.94.92.182 port 46888 [preauth]
Nov 29 07:34:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6 B/s, 0 objects/s recovering
Nov 29 07:34:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 07:34:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 07:34:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 112 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=112 pruub=11.598601341s) [0] r=-1 lpr=112 pi=[72,112)/1 crt=53'234 mlcod 0'0 active pruub 306.454895020s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:07 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 112 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=112 pruub=11.597889900s) [0] r=-1 lpr=112 pi=[72,112)/1 crt=53'234 mlcod 0'0 unknown NOTIFY pruub 306.454895020s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:07 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 07:34:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 07:34:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 112 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=112) [0] r=0 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 111 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=111 pruub=11.621112823s) [1] r=-1 lpr=111 pi=[66,111)/1 crt=71'236 lcod 71'235 mlcod 71'235 active pruub 325.659484863s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:07 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 112 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=111 pruub=11.621025085s) [1] r=-1 lpr=111 pi=[66,111)/1 crt=71'236 lcod 71'235 mlcod 0'0 unknown NOTIFY pruub 325.659484863s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:07 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=111) [1] r=0 lpr=112 pi=[66,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:07 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 07:34:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 07:34:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 07:34:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 07:34:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 07:34:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:08 compute-0 ceph-mon[75237]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6 B/s, 0 objects/s recovering
Nov 29 07:34:08 compute-0 ceph-mon[75237]: 6.7 scrub starts
Nov 29 07:34:08 compute-0 ceph-mon[75237]: 6.7 scrub ok
Nov 29 07:34:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 07:34:08 compute-0 ceph-mon[75237]: osdmap e113: 3 total, 3 up, 3 in
Nov 29 07:34:08 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 07:34:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 07:34:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 07:34:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 07:34:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 07:34:09 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 114 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=114) [1]/[0] r=0 lpr=114 pi=[66,114)/1 crt=71'236 lcod 71'235 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:09 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 114 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=66/67 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=114) [1]/[0] r=0 lpr=114 pi=[66,114)/1 crt=71'236 lcod 71'235 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:09 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[72,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:09 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[72,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 114 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=114) [1]/[0] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:09 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 114 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=114) [1]/[0] r=-1 lpr=114 pi=[66,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 114 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=114) [0]/[2] r=0 lpr=114 pi=[72,114)/1 crt=53'234 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:09 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 114 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=72/73 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=114) [0]/[2] r=0 lpr=114 pi=[72,114)/1 crt=53'234 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:10 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 07:34:10 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 07:34:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 07:34:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 07:34:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 07:34:11 compute-0 ceph-mon[75237]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:11 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 07:34:11 compute-0 ceph-mon[75237]: osdmap e114: 3 total, 3 up, 3 in
Nov 29 07:34:11 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 07:34:11 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 07:34:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 07:34:12 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 29 07:34:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:13 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 07:34:13 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 29 07:34:13 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 29 07:34:13 compute-0 sshd-session[108541]: Invalid user rahul from 114.34.106.146 port 35280
Nov 29 07:34:13 compute-0 sshd-session[108541]: Received disconnect from 114.34.106.146 port 35280:11: Bye Bye [preauth]
Nov 29 07:34:13 compute-0 sshd-session[108541]: Disconnected from invalid user rahul 114.34.106.146 port 35280 [preauth]
Nov 29 07:34:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 07:34:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 29 07:34:14 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 07:34:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 07:34:15 compute-0 ceph-mon[75237]: 11.5 scrub starts
Nov 29 07:34:15 compute-0 ceph-mon[75237]: 11.5 scrub ok
Nov 29 07:34:15 compute-0 ceph-mon[75237]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 07:34:15 compute-0 ceph-mon[75237]: 11.7 scrub starts
Nov 29 07:34:15 compute-0 ceph-mon[75237]: 11.7 scrub ok
Nov 29 07:34:15 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 07:34:15 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 07:34:15 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 115 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=114/115 n=3 ec=54/40 lis/c=72/72 les/c/f=73/73/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[72,114)/1 crt=53'234 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 07:34:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 07:34:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 07:34:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:16 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Nov 29 07:34:16 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 115 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=114/115 n=4 ec=54/40 lis/c=66/66 les/c/f=67/67/0 sis=114) [1]/[0] async=[1] r=0 lpr=114 pi=[66,114)/1 crt=71'236 lcod 71'235 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:16 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Nov 29 07:34:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 11.a scrub starts
Nov 29 07:34:16 compute-0 ceph-mon[75237]: pgmap v302: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 11.c scrub starts
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 9.b scrub starts
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 9.b scrub ok
Nov 29 07:34:16 compute-0 ceph-mon[75237]: osdmap e115: 3 total, 3 up, 3 in
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 11.a scrub ok
Nov 29 07:34:16 compute-0 ceph-mon[75237]: 11.c scrub ok
Nov 29 07:34:16 compute-0 ceph-mon[75237]: pgmap v304: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 07:34:16 compute-0 ceph-mon[75237]: osdmap e116: 3 total, 3 up, 3 in
Nov 29 07:34:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:19 compute-0 ceph-mon[75237]: 11.15 scrub starts
Nov 29 07:34:19 compute-0 ceph-mon[75237]: 11.15 scrub ok
Nov 29 07:34:19 compute-0 ceph-mon[75237]: 11.13 deep-scrub starts
Nov 29 07:34:19 compute-0 ceph-mon[75237]: 11.13 deep-scrub ok
Nov 29 07:34:19 compute-0 ceph-mon[75237]: pgmap v306: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 07:34:20 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 07:34:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 07:34:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 07:34:21 compute-0 ceph-mon[75237]: pgmap v307: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:21 compute-0 ceph-mon[75237]: 11.16 scrub starts
Nov 29 07:34:21 compute-0 ceph-mon[75237]: 11.16 scrub ok
Nov 29 07:34:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 07:34:21 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 117 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=114/115 n=4 ec=54/40 lis/c=114/66 les/c/f=115/67/0 sis=117 pruub=10.972854614s) [1] async=[1] r=-1 lpr=117 pi=[66,117)/1 crt=71'236 lcod 71'235 mlcod 71'235 active pruub 338.863067627s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:21 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 117 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=114/115 n=4 ec=54/40 lis/c=114/66 les/c/f=115/67/0 sis=117 pruub=10.972511292s) [1] r=-1 lpr=117 pi=[66,117)/1 crt=71'236 lcod 71'235 mlcod 0'0 unknown NOTIFY pruub 338.863067627s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:21 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 117 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=114/72 les/c/f=115/73/0 sis=117) [0] r=0 lpr=117 pi=[72,117)/1 luod=0'0 crt=53'234 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:21 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 117 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=0/0 n=3 ec=54/40 lis/c=114/72 les/c/f=115/73/0 sis=117) [0] r=0 lpr=117 pi=[72,117)/1 crt=53'234 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:22 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 117 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=114/115 n=3 ec=54/40 lis/c=114/72 les/c/f=115/73/0 sis=117 pruub=9.633544922s) [0] async=[0] r=-1 lpr=117 pi=[72,117)/1 crt=53'234 mlcod 53'234 active pruub 319.232940674s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:22 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 117 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=114/115 n=3 ec=54/40 lis/c=114/72 les/c/f=115/73/0 sis=117 pruub=9.633465767s) [0] r=-1 lpr=117 pi=[72,117)/1 crt=53'234 mlcod 0'0 unknown NOTIFY pruub 319.232940674s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:22 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 117 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=114/66 les/c/f=115/67/0 sis=117) [1] r=0 lpr=117 pi=[66,117)/1 luod=0'0 crt=71'236 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:22 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 117 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=0/0 n=4 ec=54/40 lis/c=114/66 les/c/f=115/67/0 sis=117) [1] r=0 lpr=117 pi=[66,117)/1 crt=71'236 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 07:34:22 compute-0 ceph-mon[75237]: pgmap v308: 305 pgs: 1 active+recovering+remapped, 1 activating+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8/205 objects misplaced (3.902%)
Nov 29 07:34:22 compute-0 ceph-mon[75237]: osdmap e117: 3 total, 3 up, 3 in
Nov 29 07:34:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 07:34:22 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 07:34:22 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Nov 29 07:34:22 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Nov 29 07:34:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:34:22 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 07:34:22 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 07:34:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 07:34:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 07:34:23 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 118 pg[9.15( v 71'236 (0'0,71'236] local-lis/les=117/118 n=4 ec=54/40 lis/c=114/66 les/c/f=115/67/0 sis=117) [1] r=0 lpr=117 pi=[66,117)/1 crt=71'236 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:23 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 118 pg[9.16( v 53'234 (0'0,53'234] local-lis/les=117/118 n=3 ec=54/40 lis/c=114/72 les/c/f=115/73/0 sis=117) [0] r=0 lpr=117 pi=[72,117)/1 crt=53'234 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:24 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 07:34:24 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 07:34:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:25 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 29 07:34:26 compute-0 sshd-session[108548]: Received disconnect from 103.234.151.178 port 53596:11: Bye Bye [preauth]
Nov 29 07:34:26 compute-0 sshd-session[108548]: Disconnected from authenticating user root 103.234.151.178 port 53596 [preauth]
Nov 29 07:34:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 07:34:28 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 07:34:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:29 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 07:34:29 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:34:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:31 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 29 07:34:32 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 07:34:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:33 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:34:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:37 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:34:38
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Nov 29 07:34:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:34:39 compute-0 sudo[108553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:39 compute-0 sudo[108553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:39 compute-0 sudo[108553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:39 compute-0 sudo[108578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:39 compute-0 sudo[108578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:39 compute-0 sudo[108578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:39 compute-0 sudo[108603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:39 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg MDS connection to Monitors appears to be laggy; 17.6605s since last acked beacon
Nov 29 07:34:39 compute-0 ceph-mds[101581]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:39 compute-0 sudo[108603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:39 compute-0 sudo[108603]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:39 compute-0 sudo[108628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:34:39 compute-0 sudo[108628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:39 compute-0 sudo[108628]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 19.9401 seconds
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg  MDS is no longer laggy
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:34:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 07:34:41 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 13.390050888s
Nov 29 07:34:41 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 13.390050888s
Nov 29 07:34:41 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.390250206s, txc = 0x558bf3d20600
Nov 29 07:34:41 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 18.092542648s
Nov 29 07:34:41 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 18.092542648s
Nov 29 07:34:41 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.093255997s, txc = 0x5571f39da900
Nov 29 07:34:41 compute-0 ceph-osd[90977]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f815810f640' had timed out after 15.000000954s
Nov 29 07:34:41 compute-0 ceph-osd[90977]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f8159111640' had timed out after 15.000000954s
Nov 29 07:34:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 07:34:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 07:34:41 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 29 07:34:41 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.360132217s, txc = 0x562226f2c600
Nov 29 07:34:41 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 18.250518799s
Nov 29 07:34:41 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 18.250518799s
Nov 29 07:34:41 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 07:34:41 compute-0 ceph-osd[88926]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f17c6fa5640' had timed out after 15.000000954s
Nov 29 07:34:41 compute-0 ceph-osd[88926]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f17c5fa3640' had timed out after 15.000000954s
Nov 29 07:34:41 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 29 07:34:41 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:34:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:34:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 07:34:43 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.785247803s, txc = 0x5571f5337b00
Nov 29 07:34:43 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.775402069s, txc = 0x5571f304e900
Nov 29 07:34:43 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 15.268116951s, txc = 0x558bf3e3af00
Nov 29 07:34:43 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.291872025s, txc = 0x558bf45d1500
Nov 29 07:34:43 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.318270683s, txc = 0x558bf434f200
Nov 29 07:34:43 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.346515656s, txc = 0x558bf216e900
Nov 29 07:34:43 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 19.106698990s, txc = 0x562226cee600
Nov 29 07:34:43 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 07:34:43 compute-0 ceph-mon[75237]: 11.1d scrub starts
Nov 29 07:34:43 compute-0 ceph-mon[75237]: 11.1d scrub ok
Nov 29 07:34:43 compute-0 ceph-mon[75237]: 9.1b deep-scrub starts
Nov 29 07:34:43 compute-0 ceph-mon[75237]: 9.1b deep-scrub ok
Nov 29 07:34:43 compute-0 ceph-mon[75237]: osdmap e118: 3 total, 3 up, 3 in
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 07:34:44 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 07:34:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7bcd849f-1b0c-43ea-b370-f390671cdc00 does not exist
Nov 29 07:34:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 420e3977-9b0e-4c45-b345-cec44d5787d5 does not exist
Nov 29 07:34:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 408a3a15-d4cd-44e4-b6df-8b9df082d047 does not exist
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v310: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 11.12 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 11.12 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.9 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.5 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v312: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.1d deep-scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v313: 305 pgs: 1 peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.12 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.12 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v314: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.11 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v315: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.14 deep-scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.13 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v316: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v317: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v318: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v319: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v320: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.13 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.11 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.14 deep-scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.5 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.1d deep-scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 9.9 scrub ok
Nov 29 07:34:44 compute-0 ceph-mon[75237]: pgmap v321: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:44 compute-0 ceph-mon[75237]: 10.10 scrub starts
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:44 compute-0 ceph-mon[75237]: osdmap e119: 3 total, 3 up, 3 in
Nov 29 07:34:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:34:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 07:34:44 compute-0 sudo[108685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:44 compute-0 sudo[108685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:44 compute-0 sudo[108685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 29 07:34:44 compute-0 sudo[108176]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:44 compute-0 sudo[108710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:44 compute-0 sudo[108710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:44 compute-0 sudo[108710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:44 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 29 07:34:44 compute-0 sudo[108736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:44 compute-0 sudo[108736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:44 compute-0 sudo[108736]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:44 compute-0 sudo[108784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:34:44 compute-0 sudo[108784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.239322877 +0000 UTC m=+0.040809701 container create bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:34:45 compute-0 systemd[1]: Started libpod-conmon-bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f.scope.
Nov 29 07:34:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.220517872 +0000 UTC m=+0.022004716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.32197721 +0000 UTC m=+0.123464044 container init bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.331724339 +0000 UTC m=+0.133211153 container start bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:34:45 compute-0 elegant_mcnulty[108959]: 167 167
Nov 29 07:34:45 compute-0 systemd[1]: libpod-bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f.scope: Deactivated successfully.
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.342276503 +0000 UTC m=+0.143763327 container attach bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.343070798 +0000 UTC m=+0.144557632 container died bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd23b5e27c81cf09dc90ee88a50ddf1580fc5422bfbe2de51b7e0103897e79c3-merged.mount: Deactivated successfully.
Nov 29 07:34:45 compute-0 sudo[109000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwqfeqiictdeyhyuqrplcxkhpsatxfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401685.0011525-137-75315580905366/AnsiballZ_command.py'
Nov 29 07:34:45 compute-0 sudo[109000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:45 compute-0 podman[108922]: 2025-11-29 07:34:45.413929109 +0000 UTC m=+0.215415933 container remove bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:34:45 compute-0 systemd[1]: libpod-conmon-bd87ab8face0145572467de50d06ea6fcae6b2d233c7ec0ce8577bc8c31dc29f.scope: Deactivated successfully.
Nov 29 07:34:45 compute-0 python3.9[109005]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:34:45 compute-0 podman[109013]: 2025-11-29 07:34:45.596240246 +0000 UTC m=+0.071533483 container create d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:34:45 compute-0 systemd[1]: Started libpod-conmon-d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02.scope.
Nov 29 07:34:45 compute-0 podman[109013]: 2025-11-29 07:34:45.556441427 +0000 UTC m=+0.031734674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 07:34:45 compute-0 podman[109013]: 2025-11-29 07:34:45.709095055 +0000 UTC m=+0.184388292 container init d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:34:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:34:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 07:34:45 compute-0 podman[109013]: 2025-11-29 07:34:45.72069729 +0000 UTC m=+0.195990517 container start d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:34:45 compute-0 ceph-mon[75237]: 10.10 scrub ok
Nov 29 07:34:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:34:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:45 compute-0 ceph-mon[75237]: pgmap v323: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:34:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:34:45 compute-0 ceph-mon[75237]: osdmap e120: 3 total, 3 up, 3 in
Nov 29 07:34:45 compute-0 ceph-mon[75237]: 9.19 deep-scrub starts
Nov 29 07:34:45 compute-0 ceph-mon[75237]: 9.19 deep-scrub ok
Nov 29 07:34:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 07:34:45 compute-0 podman[109013]: 2025-11-29 07:34:45.730673366 +0000 UTC m=+0.205966603 container attach d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:34:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:46 compute-0 sudo[109000]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 07:34:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:34:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 07:34:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:34:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 07:34:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 07:34:46 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 122 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=122 pruub=13.552791595s) [0] r=-1 lpr=122 pi=[96,122)/1 crt=70'238 mlcod 0'0 active pruub 347.768035889s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:46 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 122 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=122 pruub=13.552720070s) [0] r=-1 lpr=122 pi=[96,122)/1 crt=70'238 mlcod 0'0 unknown NOTIFY pruub 347.768035889s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:34:46 compute-0 ceph-mon[75237]: osdmap e121: 3 total, 3 up, 3 in
Nov 29 07:34:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:34:46 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 122 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=122) [0] r=0 lpr=122 pi=[96,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 29 07:34:46 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 29 07:34:46 compute-0 lucid_kowalevski[109030]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:34:46 compute-0 lucid_kowalevski[109030]: --> relative data size: 1.0
Nov 29 07:34:46 compute-0 lucid_kowalevski[109030]: --> All data devices are unavailable
Nov 29 07:34:46 compute-0 systemd[1]: libpod-d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02.scope: Deactivated successfully.
Nov 29 07:34:46 compute-0 systemd[1]: libpod-d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02.scope: Consumed 1.161s CPU time.
Nov 29 07:34:46 compute-0 podman[109013]: 2025-11-29 07:34:46.939461241 +0000 UTC m=+1.414754478 container died d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e7b466bccea94772c4a6b4dc36cb09705cb8c26a795935409e55a6ab60412c-merged.mount: Deactivated successfully.
Nov 29 07:34:47 compute-0 podman[109013]: 2025-11-29 07:34:47.011009674 +0000 UTC m=+1.486302901 container remove d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:34:47 compute-0 systemd[1]: libpod-conmon-d9d3a08283a6eefcdbe2433ac5267b90efd3685c312c6305f4caf795e2f8ef02.scope: Deactivated successfully.
Nov 29 07:34:47 compute-0 sudo[108784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:47 compute-0 sudo[109305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:47 compute-0 sudo[109305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:47 compute-0 sudo[109305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:47 compute-0 sudo[109354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:47 compute-0 sudo[109354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:47 compute-0 sudo[109354]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:47 compute-0 sudo[109403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jphlgfapvbbjgomtxiogrqqvwejgxshc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401686.5672476-145-155726942847584/AnsiballZ_selinux.py'
Nov 29 07:34:47 compute-0 sudo[109403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:47 compute-0 sudo[109408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:47 compute-0 sudo[109408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:47 compute-0 sudo[109408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:47 compute-0 sudo[109433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:34:47 compute-0 sudo[109433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:47 compute-0 python3.9[109407]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:34:47 compute-0 sudo[109403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.668675088 +0000 UTC m=+0.046348411 container create da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:34:47 compute-0 systemd[1]: Started libpod-conmon-da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e.scope.
Nov 29 07:34:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.645583331 +0000 UTC m=+0.023256684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.747901917 +0000 UTC m=+0.125575260 container init da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:34:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 07:34:47 compute-0 ceph-mon[75237]: pgmap v326: 305 pgs: 2 active+clean+scrubbing+deep, 4 active+clean+scrubbing, 299 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:34:47 compute-0 ceph-mon[75237]: osdmap e122: 3 total, 3 up, 3 in
Nov 29 07:34:47 compute-0 ceph-mon[75237]: 9.d scrub starts
Nov 29 07:34:47 compute-0 ceph-mon[75237]: 9.d scrub ok
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.755597373 +0000 UTC m=+0.133270756 container start da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:34:47 compute-0 recursing_antonelli[109537]: 167 167
Nov 29 07:34:47 compute-0 systemd[1]: libpod-da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e.scope: Deactivated successfully.
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.762822214 +0000 UTC m=+0.140495557 container attach da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.763464614 +0000 UTC m=+0.141137937 container died da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:34:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 07:34:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 07:34:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 123 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=123) [0]/[2] r=0 lpr=123 pi=[96,123)/1 crt=70'238 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:47 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 123 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=96/97 n=5 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=123) [0]/[2] r=0 lpr=123 pi=[96,123)/1 crt=70'238 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:47 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[96,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:47 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[96,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ff7799b530474ed4e0d5364a299170f7774ea122aceb78ea092cf13b9c1c5c2-merged.mount: Deactivated successfully.
Nov 29 07:34:47 compute-0 podman[109521]: 2025-11-29 07:34:47.807344199 +0000 UTC m=+0.185017522 container remove da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:34:47 compute-0 systemd[1]: libpod-conmon-da7057104800bd4783f6906e210647611a3fa5c5b7ac173dfbc6ff8087b61f4e.scope: Deactivated successfully.
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:48.003934673 +0000 UTC m=+0.042643698 container create c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:34:48 compute-0 systemd[1]: Started libpod-conmon-c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1.scope.
Nov 29 07:34:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edc9253bfd44208dc94e2c3112de4feb4f45bbf8191d81d8494d43e0b441795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edc9253bfd44208dc94e2c3112de4feb4f45bbf8191d81d8494d43e0b441795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edc9253bfd44208dc94e2c3112de4feb4f45bbf8191d81d8494d43e0b441795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:47.985661813 +0000 UTC m=+0.024370858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edc9253bfd44208dc94e2c3112de4feb4f45bbf8191d81d8494d43e0b441795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:48.090857007 +0000 UTC m=+0.129566052 container init c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:48.097633295 +0000 UTC m=+0.136342320 container start c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:48.101307417 +0000 UTC m=+0.140016582 container attach c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:34:48 compute-0 sudo[109706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffqwtjhywqqwbagjoukvuzuxvxpxdwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401687.8674774-156-258627900813872/AnsiballZ_command.py'
Nov 29 07:34:48 compute-0 sudo[109706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:48 compute-0 python3.9[109708]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:34:48 compute-0 sudo[109706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 1 unknown, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 07:34:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 07:34:48 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 07:34:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 07:34:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 07:34:48 compute-0 ceph-mon[75237]: osdmap e123: 3 total, 3 up, 3 in
Nov 29 07:34:48 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 29 07:34:48 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 124 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=123/124 n=5 ec=54/40 lis/c=96/96 les/c/f=97/97/0 sis=123) [0]/[2] async=[0] r=0 lpr=123 pi=[96,123)/1 crt=70'238 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:48 compute-0 sudo[109860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgnscegdlrbporxoecwaezaynozklmca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401688.5663655-164-143309396753652/AnsiballZ_file.py'
Nov 29 07:34:48 compute-0 sudo[109860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:48 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 29 07:34:48 compute-0 kind_euler[109651]: {
Nov 29 07:34:48 compute-0 kind_euler[109651]:     "0": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:         {
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "devices": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "/dev/loop3"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             ],
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_name": "ceph_lv0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_size": "21470642176",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "name": "ceph_lv0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "tags": {
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_name": "ceph",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.crush_device_class": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.encrypted": "0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_id": "0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.vdo": "0"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             },
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "vg_name": "ceph_vg0"
Nov 29 07:34:48 compute-0 kind_euler[109651]:         }
Nov 29 07:34:48 compute-0 kind_euler[109651]:     ],
Nov 29 07:34:48 compute-0 kind_euler[109651]:     "1": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:         {
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "devices": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "/dev/loop4"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             ],
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_name": "ceph_lv1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_size": "21470642176",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "name": "ceph_lv1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "tags": {
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_name": "ceph",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.crush_device_class": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.encrypted": "0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_id": "1",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.vdo": "0"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             },
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "vg_name": "ceph_vg1"
Nov 29 07:34:48 compute-0 kind_euler[109651]:         }
Nov 29 07:34:48 compute-0 kind_euler[109651]:     ],
Nov 29 07:34:48 compute-0 kind_euler[109651]:     "2": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:         {
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "devices": [
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "/dev/loop5"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             ],
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_name": "ceph_lv2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_size": "21470642176",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "name": "ceph_lv2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "tags": {
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.cluster_name": "ceph",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.crush_device_class": "",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.encrypted": "0",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osd_id": "2",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:                 "ceph.vdo": "0"
Nov 29 07:34:48 compute-0 kind_euler[109651]:             },
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "type": "block",
Nov 29 07:34:48 compute-0 kind_euler[109651]:             "vg_name": "ceph_vg2"
Nov 29 07:34:48 compute-0 kind_euler[109651]:         }
Nov 29 07:34:48 compute-0 kind_euler[109651]:     ]
Nov 29 07:34:48 compute-0 kind_euler[109651]: }
Nov 29 07:34:48 compute-0 systemd[1]: libpod-c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1.scope: Deactivated successfully.
Nov 29 07:34:48 compute-0 podman[109612]: 2025-11-29 07:34:48.952505203 +0000 UTC m=+0.991214228 container died c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4edc9253bfd44208dc94e2c3112de4feb4f45bbf8191d81d8494d43e0b441795-merged.mount: Deactivated successfully.
Nov 29 07:34:49 compute-0 podman[109612]: 2025-11-29 07:34:49.022793868 +0000 UTC m=+1.061502893 container remove c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euler, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:34:49 compute-0 systemd[1]: libpod-conmon-c91c699aee17b15ac88da119c07c416eb59939ec22ef7f77e35e934518852af1.scope: Deactivated successfully.
Nov 29 07:34:49 compute-0 sudo[109433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:49 compute-0 python3.9[109864]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:49 compute-0 sudo[109860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:49 compute-0 sudo[109879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:49 compute-0 sudo[109879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:49 compute-0 sudo[109879]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:49 compute-0 sudo[109904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:49 compute-0 sudo[109904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:49 compute-0 sudo[109904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:49 compute-0 sudo[109953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:49 compute-0 sudo[109953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:49 compute-0 sudo[109953]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:49 compute-0 sudo[109978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:34:49 compute-0 sudo[109978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.595600703 +0000 UTC m=+0.049110956 container create 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:34:49 compute-0 systemd[1]: Started libpod-conmon-30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f.scope.
Nov 29 07:34:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.570618217 +0000 UTC m=+0.024128450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.679578266 +0000 UTC m=+0.133088489 container init 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.690632925 +0000 UTC m=+0.144143138 container start 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.695211095 +0000 UTC m=+0.148721338 container attach 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:34:49 compute-0 angry_merkle[110133]: 167 167
Nov 29 07:34:49 compute-0 systemd[1]: libpod-30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f.scope: Deactivated successfully.
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.698191786 +0000 UTC m=+0.151702019 container died 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-81577c240eb64932f14447beaff828e27035c55dc96326c2def556aa3dd57406-merged.mount: Deactivated successfully.
Nov 29 07:34:49 compute-0 podman[110094]: 2025-11-29 07:34:49.754009627 +0000 UTC m=+0.207519850 container remove 30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:34:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 07:34:49 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 07:34:49 compute-0 systemd[1]: libpod-conmon-30a1d9605be057dc690915016dc1b7941d488524c8de3b2e11f9c921b388a06f.scope: Deactivated successfully.
Nov 29 07:34:49 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 07:34:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 07:34:49 compute-0 ceph-mon[75237]: pgmap v329: 305 pgs: 1 unknown, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:49 compute-0 ceph-mon[75237]: 10.19 scrub starts
Nov 29 07:34:49 compute-0 ceph-mon[75237]: 10.19 scrub ok
Nov 29 07:34:49 compute-0 ceph-mon[75237]: osdmap e124: 3 total, 3 up, 3 in
Nov 29 07:34:49 compute-0 ceph-mon[75237]: 9.3 scrub starts
Nov 29 07:34:49 compute-0 sudo[110202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsimkrjrzzhhprxwrpkrnldashfvkxeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401689.2829943-172-9697802811444/AnsiballZ_mount.py'
Nov 29 07:34:49 compute-0 sudo[110202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 07:34:49 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 07:34:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 07:34:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 125 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=123/96 les/c/f=124/97/0 sis=125) [0] r=0 lpr=125 pi=[96,125)/1 luod=0'0 crt=70'238 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:49 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 125 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=123/96 les/c/f=124/97/0 sis=125) [0] r=0 lpr=125 pi=[96,125)/1 crt=70'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:49 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 125 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=123/124 n=5 ec=54/40 lis/c=123/96 les/c/f=124/97/0 sis=125 pruub=15.001372337s) [0] async=[0] r=-1 lpr=125 pi=[96,125)/1 crt=70'238 mlcod 70'238 active pruub 352.315979004s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:49 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 125 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=123/124 n=5 ec=54/40 lis/c=123/96 les/c/f=124/97/0 sis=125 pruub=15.000927925s) [0] r=-1 lpr=125 pi=[96,125)/1 crt=70'238 mlcod 0'0 unknown NOTIFY pruub 352.315979004s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:49 compute-0 podman[110210]: 2025-11-29 07:34:49.895677939 +0000 UTC m=+0.022279034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:50 compute-0 python3.9[110204]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:34:50 compute-0 sudo[110202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:50 compute-0 podman[110210]: 2025-11-29 07:34:50.213559811 +0000 UTC m=+0.340160866 container create f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:34:50 compute-0 systemd[1]: Started libpod-conmon-f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386.scope.
Nov 29 07:34:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98ce2505495803d4c25d31ca93deda278385fed71eeb3760e9ed417e1fde3ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98ce2505495803d4c25d31ca93deda278385fed71eeb3760e9ed417e1fde3ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98ce2505495803d4c25d31ca93deda278385fed71eeb3760e9ed417e1fde3ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98ce2505495803d4c25d31ca93deda278385fed71eeb3760e9ed417e1fde3ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:34:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 1 unknown, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:50 compute-0 podman[110210]: 2025-11-29 07:34:50.867257904 +0000 UTC m=+0.993858989 container init f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:34:50 compute-0 podman[110210]: 2025-11-29 07:34:50.87820434 +0000 UTC m=+1.004805425 container start f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:34:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 07:34:50 compute-0 podman[110210]: 2025-11-29 07:34:50.930657287 +0000 UTC m=+1.057258352 container attach f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:34:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 07:34:50 compute-0 ceph-mon[75237]: 9.3 scrub ok
Nov 29 07:34:50 compute-0 ceph-mon[75237]: 10.b scrub starts
Nov 29 07:34:50 compute-0 ceph-mon[75237]: 10.b scrub ok
Nov 29 07:34:50 compute-0 ceph-mon[75237]: 9.1 scrub starts
Nov 29 07:34:50 compute-0 ceph-mon[75237]: 9.1 scrub ok
Nov 29 07:34:50 compute-0 ceph-mon[75237]: osdmap e125: 3 total, 3 up, 3 in
Nov 29 07:34:51 compute-0 sudo[110380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzlixodmtyzizamjzijyudnedmjuqwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401690.7545145-200-127938526674522/AnsiballZ_file.py'
Nov 29 07:34:51 compute-0 sudo[110380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 07:34:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:51 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 126 pg[9.1c( v 70'238 (0'0,70'238] local-lis/les=125/126 n=5 ec=54/40 lis/c=123/96 les/c/f=124/97/0 sis=125) [0] r=0 lpr=125 pi=[96,125)/1 crt=70'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:51 compute-0 python3.9[110382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:51 compute-0 sudo[110380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:51 compute-0 sudo[110546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doxmvwcfvmincukzcqflwsjfmsnketqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401691.4212446-208-168870389805636/AnsiballZ_stat.py'
Nov 29 07:34:51 compute-0 sudo[110546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:51 compute-0 happy_greider[110250]: {
Nov 29 07:34:51 compute-0 happy_greider[110250]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_id": 2,
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "type": "bluestore"
Nov 29 07:34:51 compute-0 happy_greider[110250]:     },
Nov 29 07:34:51 compute-0 happy_greider[110250]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_id": 0,
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "type": "bluestore"
Nov 29 07:34:51 compute-0 happy_greider[110250]:     },
Nov 29 07:34:51 compute-0 happy_greider[110250]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_id": 1,
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:34:51 compute-0 happy_greider[110250]:         "type": "bluestore"
Nov 29 07:34:51 compute-0 happy_greider[110250]:     }
Nov 29 07:34:51 compute-0 happy_greider[110250]: }
Nov 29 07:34:51 compute-0 systemd[1]: libpod-f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386.scope: Deactivated successfully.
Nov 29 07:34:51 compute-0 podman[110210]: 2025-11-29 07:34:51.923853325 +0000 UTC m=+2.050454390 container died f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:34:51 compute-0 systemd[1]: libpod-f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386.scope: Consumed 1.042s CPU time.
Nov 29 07:34:51 compute-0 python3.9[110548]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:51 compute-0 ceph-mon[75237]: pgmap v332: 305 pgs: 1 unknown, 1 active+clean+scrubbing, 303 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:51 compute-0 ceph-mon[75237]: osdmap e126: 3 total, 3 up, 3 in
Nov 29 07:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98ce2505495803d4c25d31ca93deda278385fed71eeb3760e9ed417e1fde3ba-merged.mount: Deactivated successfully.
Nov 29 07:34:51 compute-0 sudo[110546]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:51 compute-0 podman[110210]: 2025-11-29 07:34:51.995319225 +0000 UTC m=+2.121920290 container remove f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:34:52 compute-0 systemd[1]: libpod-conmon-f8f1624e9b0635e1def07a3dd5aa294e69391d3168551611d9e8387b487ce386.scope: Deactivated successfully.
Nov 29 07:34:52 compute-0 sudo[109978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:34:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:34:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 8149b881-ab5d-420b-9162-019af51ccf49 does not exist
Nov 29 07:34:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f4d43c67-b4d0-454e-be9c-302e5e08c48a does not exist
Nov 29 07:34:52 compute-0 sudo[110600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:52 compute-0 sudo[110600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:52 compute-0 sudo[110600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:52 compute-0 sudo[110649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:34:52 compute-0 sudo[110649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:52 compute-0 sudo[110649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:52 compute-0 sudo[110698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbiridogalxepderuyquskbkvwptoeea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401691.4212446-208-168870389805636/AnsiballZ_file.py'
Nov 29 07:34:52 compute-0 sudo[110698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:52 compute-0 python3.9[110702]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:52 compute-0 sudo[110698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 07:34:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:34:52 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 07:34:52 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 07:34:52 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 07:34:52 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 07:34:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 07:34:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:34:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:34:53 compute-0 ceph-mon[75237]: 9.11 scrub starts
Nov 29 07:34:53 compute-0 ceph-mon[75237]: 10.1a scrub starts
Nov 29 07:34:53 compute-0 ceph-mon[75237]: 10.1a scrub ok
Nov 29 07:34:53 compute-0 ceph-mon[75237]: 9.11 scrub ok
Nov 29 07:34:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:34:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 07:34:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 07:34:53 compute-0 sudo[110852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjlmsaewbimgjjydzgyhhjxmoygdicr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401693.2606115-229-266590827203517/AnsiballZ_stat.py'
Nov 29 07:34:53 compute-0 sudo[110852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:53 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 29 07:34:53 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 29 07:34:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 07:34:53 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 07:34:53 compute-0 python3.9[110854]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:34:53 compute-0 sudo[110852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:54 compute-0 ceph-mon[75237]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:34:54 compute-0 ceph-mon[75237]: osdmap e127: 3 total, 3 up, 3 in
Nov 29 07:34:54 compute-0 ceph-mon[75237]: 6.5 scrub starts
Nov 29 07:34:54 compute-0 ceph-mon[75237]: 6.5 scrub ok
Nov 29 07:34:54 compute-0 ceph-mon[75237]: 10.f scrub starts
Nov 29 07:34:54 compute-0 ceph-mon[75237]: 10.f scrub ok
Nov 29 07:34:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 07:34:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:34:54 compute-0 sudo[111006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnndoqerphcezvabjoeviyqdwsfsayqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401694.229658-242-42708348161133/AnsiballZ_getent.py'
Nov 29 07:34:54 compute-0 sudo[111006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:54 compute-0 python3.9[111008]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:34:54 compute-0 sudo[111006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 07:34:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:34:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:34:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 07:34:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 07:34:55 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 128 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=78/79 n=6 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=128 pruub=11.987521172s) [0] r=-1 lpr=128 pi=[78,128)/1 crt=71'240 mlcod 0'0 active pruub 354.762023926s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:55 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 128 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=78/79 n=6 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=128 pruub=11.987465858s) [0] r=-1 lpr=128 pi=[78,128)/1 crt=71'240 mlcod 0'0 unknown NOTIFY pruub 354.762023926s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:55 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=128) [0] r=0 lpr=128 pi=[78,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:55 compute-0 sudo[111159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syoeautqyufeygmlzumhnxwttfsmxkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401695.2203262-252-1759647389920/AnsiballZ_getent.py'
Nov 29 07:34:55 compute-0 sudo[111159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:34:55 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 07:34:55 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 07:34:55 compute-0 python3.9[111161]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:34:55 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 07:34:55 compute-0 sudo[111159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:55 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 07:34:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 07:34:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 07:34:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 07:34:56 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[78,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:56 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[78,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:56 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 129 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=78/79 n=6 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=129) [0]/[2] r=0 lpr=129 pi=[78,129)/1 crt=71'240 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:56 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 129 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=78/79 n=6 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=129) [0]/[2] r=0 lpr=129 pi=[78,129)/1 crt=71'240 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:56 compute-0 ceph-mon[75237]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:34:56 compute-0 ceph-mon[75237]: osdmap e128: 3 total, 3 up, 3 in
Nov 29 07:34:56 compute-0 ceph-mon[75237]: 6.b scrub starts
Nov 29 07:34:56 compute-0 ceph-mon[75237]: 6.b scrub ok
Nov 29 07:34:56 compute-0 ceph-mon[75237]: osdmap e129: 3 total, 3 up, 3 in
Nov 29 07:34:56 compute-0 sudo[111312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohwnvhdmyattkwusexiocilwjcqetxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401695.9792638-260-73719701087740/AnsiballZ_group.py'
Nov 29 07:34:56 compute-0 sudo[111312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:34:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:34:56 compute-0 python3.9[111314]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:34:56 compute-0 sudo[111312]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:56 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 29 07:34:56 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 29 07:34:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 07:34:57 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:34:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 07:34:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 07:34:57 compute-0 ceph-mon[75237]: 9.e scrub starts
Nov 29 07:34:57 compute-0 ceph-mon[75237]: 9.e scrub ok
Nov 29 07:34:57 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:34:57 compute-0 ceph-mon[75237]: 6.9 deep-scrub starts
Nov 29 07:34:57 compute-0 ceph-mon[75237]: 6.9 deep-scrub ok
Nov 29 07:34:57 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 130 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=130 pruub=12.348845482s) [1] r=-1 lpr=130 pi=[84,130)/1 crt=71'238 mlcod 0'0 active pruub 357.162292480s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:57 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 130 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=130 pruub=12.348786354s) [1] r=-1 lpr=130 pi=[84,130)/1 crt=71'238 mlcod 0'0 unknown NOTIFY pruub 357.162292480s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:57 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 130 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=129/130 n=6 ec=54/40 lis/c=78/78 les/c/f=79/79/0 sis=129) [0]/[2] async=[0] r=0 lpr=129 pi=[78,129)/1 crt=71'240 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:34:57 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=130) [1] r=0 lpr=130 pi=[84,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:57 compute-0 sudo[111464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqrulenvroxmsiizwbdshneknadqbdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401697.0064838-269-12314922240553/AnsiballZ_file.py'
Nov 29 07:34:57 compute-0 sudo[111464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:57 compute-0 python3.9[111466]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:34:57 compute-0 sudo[111464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 07:34:57 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 07:34:57 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 07:34:57 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 07:34:58 compute-0 sudo[111616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsnqnkgbbwxphcbwweegbdtwxirsrrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401697.9718955-280-134318444075521/AnsiballZ_dnf.py'
Nov 29 07:34:58 compute-0 sudo[111616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 07:34:58 compute-0 ceph-mon[75237]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:34:58 compute-0 ceph-mon[75237]: osdmap e130: 3 total, 3 up, 3 in
Nov 29 07:34:58 compute-0 ceph-mon[75237]: 6.d scrub starts
Nov 29 07:34:58 compute-0 ceph-mon[75237]: 6.d scrub ok
Nov 29 07:34:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 07:34:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 07:34:58 compute-0 python3.9[111618]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:34:58 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 131 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=129/78 les/c/f=130/79/0 sis=131) [0] r=0 lpr=131 pi=[78,131)/1 luod=0'0 crt=71'240 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:58 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 131 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=0/0 n=6 ec=54/40 lis/c=129/78 les/c/f=130/79/0 sis=131) [0] r=0 lpr=131 pi=[78,131)/1 crt=71'240 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 131 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=129/130 n=6 ec=54/40 lis/c=129/78 les/c/f=130/79/0 sis=131 pruub=14.651765823s) [0] async=[0] r=-1 lpr=131 pi=[78,131)/1 crt=71'240 mlcod 71'240 active pruub 360.820220947s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 131 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=129/130 n=6 ec=54/40 lis/c=129/78 les/c/f=130/79/0 sis=131 pruub=14.651434898s) [0] r=-1 lpr=131 pi=[78,131)/1 crt=71'240 mlcod 0'0 unknown NOTIFY pruub 360.820220947s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 131 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=131) [1]/[2] r=0 lpr=131 pi=[84,131)/1 crt=71'238 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:58 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 131 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=84/85 n=5 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=131) [1]/[2] r=0 lpr=131 pi=[84,131)/1 crt=71'238 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:34:58 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[84,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:34:58 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[84,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:34:58 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 07:34:58 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 29 07:34:58 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 07:34:58 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 29 07:34:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 07:34:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 07:34:59 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 07:34:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 07:34:59 compute-0 ceph-mon[75237]: 9.6 scrub starts
Nov 29 07:34:59 compute-0 ceph-mon[75237]: 9.6 scrub ok
Nov 29 07:34:59 compute-0 ceph-mon[75237]: osdmap e131: 3 total, 3 up, 3 in
Nov 29 07:34:59 compute-0 ceph-mon[75237]: 9.10 scrub starts
Nov 29 07:34:59 compute-0 ceph-mon[75237]: 9.16 scrub starts
Nov 29 07:35:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 07:35:00 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 132 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=131/132 n=5 ec=54/40 lis/c=84/84 les/c/f=85/85/0 sis=131) [1]/[2] async=[1] r=0 lpr=131 pi=[84,131)/1 crt=71'238 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:35:00 compute-0 ceph-osd[88926]: osd.0 pg_epoch: 132 pg[9.1e( v 71'240 (0'0,71'240] local-lis/les=131/132 n=6 ec=54/40 lis/c=129/78 les/c/f=130/79/0 sis=131) [0] r=0 lpr=131 pi=[78,131)/1 crt=71'240 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:35:00 compute-0 sshd-session[111620]: Invalid user monitoring from 20.185.243.158 port 44500
Nov 29 07:35:00 compute-0 sudo[111616]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:00 compute-0 sshd-session[111620]: Received disconnect from 20.185.243.158 port 44500:11: Bye Bye [preauth]
Nov 29 07:35:00 compute-0 sshd-session[111620]: Disconnected from invalid user monitoring 20.185.243.158 port 44500 [preauth]
Nov 29 07:35:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:00 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 07:35:00 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 29 07:35:00 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 07:35:00 compute-0 ceph-osd[88926]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 29 07:35:00 compute-0 sudo[111772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsnpbigqebdmydqkoihubqmsrcpbrek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401700.537591-288-202594403001998/AnsiballZ_file.py'
Nov 29 07:35:00 compute-0 sudo[111772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:00 compute-0 python3.9[111774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:01 compute-0 sudo[111772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 07:35:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 07:35:01 compute-0 ceph-mon[75237]: pgmap v342: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.10 scrub ok
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.16 scrub ok
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.1c scrub starts
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.1c scrub ok
Nov 29 07:35:01 compute-0 ceph-mon[75237]: osdmap e132: 3 total, 3 up, 3 in
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.1e scrub starts
Nov 29 07:35:01 compute-0 ceph-mon[75237]: 9.1e scrub ok
Nov 29 07:35:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 07:35:01 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 133 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=131/132 n=5 ec=54/40 lis/c=131/84 les/c/f=132/85/0 sis=133 pruub=14.968995094s) [1] async=[1] r=-1 lpr=133 pi=[84,133)/1 crt=71'238 mlcod 71'238 active pruub 363.803009033s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:35:01 compute-0 ceph-osd[90977]: osd.2 pg_epoch: 133 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=131/132 n=5 ec=54/40 lis/c=131/84 les/c/f=132/85/0 sis=133 pruub=14.968913078s) [1] r=-1 lpr=133 pi=[84,133)/1 crt=71'238 mlcod 0'0 unknown NOTIFY pruub 363.803009033s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:35:01 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 133 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=131/84 les/c/f=132/85/0 sis=133) [1] r=0 lpr=133 pi=[84,133)/1 luod=0'0 crt=71'238 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:35:01 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 133 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=0/0 n=5 ec=54/40 lis/c=131/84 les/c/f=132/85/0 sis=133) [1] r=0 lpr=133 pi=[84,133)/1 crt=71'238 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:35:01 compute-0 sudo[111924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arivvxdxdkygloihardftwtlljgovnau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401701.1768699-296-217380095655197/AnsiballZ_stat.py'
Nov 29 07:35:01 compute-0 sudo[111924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:01 compute-0 python3.9[111926]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:01 compute-0 sudo[111924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:01 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 07:35:01 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 07:35:01 compute-0 sudo[112002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effgadrfcimbaknsaorrruslefgqugwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401701.1768699-296-217380095655197/AnsiballZ_file.py'
Nov 29 07:35:01 compute-0 sudo[112002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:02 compute-0 python3.9[112004]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:02 compute-0 sudo[112002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 07:35:02 compute-0 ceph-mon[75237]: pgmap v344: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:02 compute-0 ceph-mon[75237]: 9.17 scrub starts
Nov 29 07:35:02 compute-0 ceph-mon[75237]: 9.17 scrub ok
Nov 29 07:35:02 compute-0 ceph-mon[75237]: osdmap e133: 3 total, 3 up, 3 in
Nov 29 07:35:02 compute-0 ceph-mon[75237]: 9.1a scrub starts
Nov 29 07:35:02 compute-0 ceph-mon[75237]: 9.1a scrub ok
Nov 29 07:35:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 07:35:02 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 07:35:02 compute-0 ceph-osd[89968]: osd.1 pg_epoch: 134 pg[9.1f( v 71'238 (0'0,71'238] local-lis/les=133/134 n=5 ec=54/40 lis/c=131/84 les/c/f=132/85/0 sis=133) [1] r=0 lpr=133 pi=[84,133)/1 crt=71'238 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:35:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 07:35:02 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 29 07:35:02 compute-0 sudo[112154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovlrckpacepjnumzpmyhpnphxmklffsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401702.4065845-309-125776658022787/AnsiballZ_stat.py'
Nov 29 07:35:02 compute-0 sudo[112154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:02 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 29 07:35:02 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 07:35:02 compute-0 python3.9[112156]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:02 compute-0 sudo[112154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:03 compute-0 sudo[112232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capasmdlwzanbvznbdinihefgjjfrmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401702.4065845-309-125776658022787/AnsiballZ_file.py'
Nov 29 07:35:03 compute-0 sudo[112232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:03 compute-0 ceph-mon[75237]: osdmap e134: 3 total, 3 up, 3 in
Nov 29 07:35:03 compute-0 ceph-mon[75237]: 9.15 scrub starts
Nov 29 07:35:03 compute-0 ceph-mon[75237]: 9.15 scrub ok
Nov 29 07:35:03 compute-0 python3.9[112234]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:03 compute-0 sudo[112232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:04 compute-0 sudo[112384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idibvambknuczykirxlrjmjqlkbgnfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401703.723797-324-259072639880462/AnsiballZ_dnf.py'
Nov 29 07:35:04 compute-0 sudo[112384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:04 compute-0 python3.9[112386]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:35:04 compute-0 ceph-mon[75237]: pgmap v347: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:04 compute-0 ceph-mon[75237]: 9.7 scrub starts
Nov 29 07:35:04 compute-0 ceph-mon[75237]: 9.7 scrub ok
Nov 29 07:35:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:04 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 07:35:04 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 07:35:04 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 29 07:35:04 compute-0 ceph-osd[89968]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 29 07:35:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:06 compute-0 ceph-mon[75237]: 9.1f scrub starts
Nov 29 07:35:06 compute-0 ceph-mon[75237]: 9.1f scrub ok
Nov 29 07:35:06 compute-0 sudo[112384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:07 compute-0 python3.9[112537]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:07 compute-0 ceph-mon[75237]: pgmap v348: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:07 compute-0 ceph-mon[75237]: 9.f scrub starts
Nov 29 07:35:07 compute-0 ceph-mon[75237]: 9.f scrub ok
Nov 29 07:35:07 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.18 deep-scrub starts
Nov 29 07:35:07 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.18 deep-scrub ok
Nov 29 07:35:08 compute-0 python3.9[112689]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 07:35:08 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 07:35:08 compute-0 ceph-mon[75237]: pgmap v349: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 python3.9[112839]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:09 compute-0 ceph-mon[75237]: 9.18 deep-scrub starts
Nov 29 07:35:09 compute-0 ceph-mon[75237]: 9.18 deep-scrub ok
Nov 29 07:35:09 compute-0 ceph-mon[75237]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:09 compute-0 ceph-mon[75237]: 9.8 scrub starts
Nov 29 07:35:09 compute-0 ceph-mon[75237]: 9.8 scrub ok
Nov 29 07:35:09 compute-0 sudo[112989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmosbgthrzmfuiipboxpcqnngosskels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401709.2282476-365-241710333882437/AnsiballZ_systemd.py'
Nov 29 07:35:09 compute-0 sudo[112989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:10 compute-0 python3.9[112991]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:35:10 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:35:10 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:35:10 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:35:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:35:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:35:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:10 compute-0 sudo[112989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:10 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 07:35:10 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 07:35:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:11 compute-0 python3.9[113154]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:35:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:13 compute-0 ceph-mon[75237]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:13 compute-0 ceph-mon[75237]: 9.c scrub starts
Nov 29 07:35:13 compute-0 ceph-mon[75237]: 9.c scrub ok
Nov 29 07:35:13 compute-0 sudo[113304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvehtfjejxsbrfcmcnbykymzottukis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401713.2747264-422-213899976621034/AnsiballZ_systemd.py'
Nov 29 07:35:13 compute-0 sudo[113304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:13 compute-0 python3.9[113306]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:35:13 compute-0 sudo[113304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:14 compute-0 sudo[113458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhuugkqvisnmgrmpfqxnsqqqatsrifu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401714.1163607-422-46646216058758/AnsiballZ_systemd.py'
Nov 29 07:35:14 compute-0 sudo[113458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:14 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 07:35:14 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 07:35:14 compute-0 python3.9[113460]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:35:14 compute-0 ceph-mon[75237]: pgmap v352: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:14 compute-0 sudo[113458]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:15 compute-0 sshd-session[105603]: Connection closed by 192.168.122.30 port 40250
Nov 29 07:35:15 compute-0 sshd-session[105600]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:35:15 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 07:35:15 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.948s CPU time.
Nov 29 07:35:15 compute-0 systemd-logind[782]: Session 35 logged out. Waiting for processes to exit.
Nov 29 07:35:15 compute-0 systemd-logind[782]: Removed session 35.
Nov 29 07:35:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:16 compute-0 ceph-mon[75237]: pgmap v353: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:16 compute-0 ceph-mon[75237]: 6.f scrub starts
Nov 29 07:35:16 compute-0 ceph-mon[75237]: 6.f scrub ok
Nov 29 07:35:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:16 compute-0 sshd-session[113487]: Invalid user system from 103.236.140.19 port 35188
Nov 29 07:35:17 compute-0 sshd-session[113487]: Received disconnect from 103.236.140.19 port 35188:11: Bye Bye [preauth]
Nov 29 07:35:17 compute-0 sshd-session[113487]: Disconnected from invalid user system 103.236.140.19 port 35188 [preauth]
Nov 29 07:35:17 compute-0 ceph-mon[75237]: pgmap v354: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 07:35:19 compute-0 ceph-osd[90977]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 07:35:19 compute-0 ceph-mon[75237]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:21 compute-0 sshd-session[113489]: Accepted publickey for zuul from 192.168.122.30 port 52212 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:35:21 compute-0 systemd-logind[782]: New session 36 of user zuul.
Nov 29 07:35:21 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 29 07:35:21 compute-0 sshd-session[113489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:35:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:22 compute-0 python3.9[113642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:35:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:22 compute-0 ceph-mon[75237]: 9.13 scrub starts
Nov 29 07:35:22 compute-0 ceph-mon[75237]: 9.13 scrub ok
Nov 29 07:35:22 compute-0 ceph-mon[75237]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:23 compute-0 sudo[113796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akwmknzeuiwrcxxjhgasquvqqencwsgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401723.01849-36-66273253110205/AnsiballZ_getent.py'
Nov 29 07:35:23 compute-0 sudo[113796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:23 compute-0 python3.9[113798]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:35:23 compute-0 sudo[113796]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:23 compute-0 ceph-mon[75237]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:24 compute-0 sudo[113949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnamxgulywyjcwdqqnehaqsoygbgoijb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401723.9427326-48-206876572666092/AnsiballZ_setup.py'
Nov 29 07:35:24 compute-0 sudo[113949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:24 compute-0 python3.9[113951]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:35:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:24 compute-0 sudo[113949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:25 compute-0 sudo[114033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqjngkgxnujivxeicvuhxhtwbwreravo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401723.9427326-48-206876572666092/AnsiballZ_dnf.py'
Nov 29 07:35:25 compute-0 sudo[114033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:25 compute-0 python3.9[114035]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:35:25 compute-0 ceph-mon[75237]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:26 compute-0 sshd-session[114036]: Invalid user bob from 114.34.106.146 port 51508
Nov 29 07:35:26 compute-0 sshd-session[114036]: Received disconnect from 114.34.106.146 port 51508:11: Bye Bye [preauth]
Nov 29 07:35:26 compute-0 sshd-session[114036]: Disconnected from invalid user bob 114.34.106.146 port 51508 [preauth]
Nov 29 07:35:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:26 compute-0 sudo[114033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:27 compute-0 sudo[114188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfanllegqpjowgtpxwqrwmgdalbrfnyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401727.219936-62-199227886379756/AnsiballZ_dnf.py'
Nov 29 07:35:27 compute-0 sudo[114188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:27 compute-0 python3.9[114190]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:35:28 compute-0 ceph-mon[75237]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:29 compute-0 ceph-mon[75237]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:29 compute-0 sudo[114188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:30 compute-0 sudo[114341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagmpruzuxobondvkrdttjzuutedhijb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401729.9207792-70-233499770804154/AnsiballZ_systemd.py'
Nov 29 07:35:30 compute-0 sudo[114341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:30 compute-0 python3.9[114343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:35:30 compute-0 sudo[114341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:31 compute-0 python3.9[114496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:35:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:32 compute-0 ceph-mon[75237]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:32 compute-0 sudo[114646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnccobvphmzegueehlzomaqbstozlni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401731.9095554-88-179566661636252/AnsiballZ_sefcontext.py'
Nov 29 07:35:32 compute-0 sudo[114646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:32 compute-0 python3.9[114648]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:35:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:32 compute-0 sudo[114646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:33 compute-0 python3.9[114798]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:35:34 compute-0 ceph-mon[75237]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:34 compute-0 sudo[114954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylgpnkvmwjydqctummldjpiamhfyzavx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401734.2196186-106-4787651994741/AnsiballZ_dnf.py'
Nov 29 07:35:34 compute-0 sudo[114954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:34 compute-0 python3.9[114956]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:35:36 compute-0 sudo[114954]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:36 compute-0 ceph-mon[75237]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:36 compute-0 sudo[115109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtcidzetuxwiioepvmjpsperuuiffxoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401736.3552048-114-16687180151236/AnsiballZ_command.py'
Nov 29 07:35:36 compute-0 sudo[115109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:37 compute-0 python3.9[115111]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:35:37 compute-0 sudo[115109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:38 compute-0 sudo[115396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlyvwdoultqaszqrshhogyqhijvdfabd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401738.0921507-122-186017068381715/AnsiballZ_file.py'
Nov 29 07:35:38 compute-0 sudo[115396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:38 compute-0 python3.9[115398]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:38 compute-0 sudo[115396]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:38 compute-0 sshd-session[115096]: Invalid user qwerty from 103.234.151.178 port 13884
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:35:38
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Nov 29 07:35:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:35:39 compute-0 sshd-session[115096]: Received disconnect from 103.234.151.178 port 13884:11: Bye Bye [preauth]
Nov 29 07:35:39 compute-0 sshd-session[115096]: Disconnected from invalid user qwerty 103.234.151.178 port 13884 [preauth]
Nov 29 07:35:39 compute-0 python3.9[115548]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:39 compute-0 ceph-mon[75237]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:40 compute-0 sudo[115700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdsxvuhnfbipvqpolshwumnojpaydlnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401739.7658749-138-206761556781831/AnsiballZ_dnf.py'
Nov 29 07:35:40 compute-0 sudo[115700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:40 compute-0 python3.9[115702]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:35:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:41 compute-0 ceph-mon[75237]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:35:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:35:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:43 compute-0 sudo[115700]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:43 compute-0 ceph-mon[75237]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:43 compute-0 sudo[115853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyunibeqvovcgqdomycefozwmlugnhwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401743.4173722-147-61144745308834/AnsiballZ_dnf.py'
Nov 29 07:35:43 compute-0 sudo[115853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:43 compute-0 python3.9[115855]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:35:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:45 compute-0 sudo[115853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:45 compute-0 ceph-mon[75237]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:46 compute-0 sudo[116006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofddfjpukkxmketbqovskorbtcehvpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401746.2082584-159-65800829285705/AnsiballZ_stat.py'
Nov 29 07:35:46 compute-0 sudo[116006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:46 compute-0 python3.9[116008]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:46 compute-0 sudo[116006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:47 compute-0 sudo[116160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgsgppmvrlknshgakrpudqqgivfusylg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401746.8841765-167-200942008309928/AnsiballZ_slurp.py'
Nov 29 07:35:47 compute-0 sudo[116160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:51 compute-0 python3.9[116162]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 07:35:51 compute-0 sudo[116160]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:52 compute-0 sudo[116187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:52 compute-0 sudo[116187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:52 compute-0 sudo[116187]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 sudo[116212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:35:52 compute-0 sudo[116212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:52 compute-0 sudo[116212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 sudo[116237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:52 compute-0 sudo[116237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:52 compute-0 sudo[116237]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 sudo[116262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:35:52 compute-0 sudo[116262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:52 compute-0 ceph-mon[75237]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:52 compute-0 sshd-session[113492]: Connection closed by 192.168.122.30 port 52212
Nov 29 07:35:52 compute-0 sshd-session[113489]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:35:52 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 07:35:52 compute-0 systemd[1]: session-36.scope: Consumed 18.801s CPU time.
Nov 29 07:35:52 compute-0 systemd-logind[782]: Session 36 logged out. Waiting for processes to exit.
Nov 29 07:35:52 compute-0 systemd-logind[782]: Removed session 36.
Nov 29 07:35:52 compute-0 sudo[116262]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:35:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:35:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:35:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:35:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:35:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:35:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5354aa46-9bd5-4f32-8347-1dd9acce9bd3 does not exist
Nov 29 07:35:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 958b6592-07e7-4271-8651-5e75b3083f00 does not exist
Nov 29 07:35:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1bd311ef-a4bf-4201-b93c-c9fcbe8081f2 does not exist
Nov 29 07:35:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:35:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:35:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:35:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:35:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:35:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:35:53 compute-0 sudo[116317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:53 compute-0 sudo[116317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:53 compute-0 sudo[116317]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:53 compute-0 sudo[116342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:35:53 compute-0 sudo[116342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:53 compute-0 sudo[116342]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:53 compute-0 sudo[116367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:53 compute-0 sudo[116367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:53 compute-0 sudo[116367]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:53 compute-0 sudo[116392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:35:53 compute-0 sudo[116392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:53 compute-0 podman[116457]: 2025-11-29 07:35:53.769297249 +0000 UTC m=+0.085566054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:54 compute-0 podman[116457]: 2025-11-29 07:35:54.000185624 +0000 UTC m=+0.316454389 container create d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:35:54 compute-0 ceph-mon[75237]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:54 compute-0 ceph-mon[75237]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:54 compute-0 ceph-mon[75237]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:54 compute-0 ceph-mon[75237]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:35:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:35:54 compute-0 systemd[1]: Started libpod-conmon-d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e.scope.
Nov 29 07:35:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:55 compute-0 podman[116457]: 2025-11-29 07:35:55.021614502 +0000 UTC m=+1.337883347 container init d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:35:55 compute-0 podman[116457]: 2025-11-29 07:35:55.029489567 +0000 UTC m=+1.345758362 container start d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:35:55 compute-0 infallible_shirley[116475]: 167 167
Nov 29 07:35:55 compute-0 systemd[1]: libpod-d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e.scope: Deactivated successfully.
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:35:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:57 compute-0 podman[116457]: 2025-11-29 07:35:57.202728274 +0000 UTC m=+3.518997069 container attach d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:35:57 compute-0 podman[116457]: 2025-11-29 07:35:57.203808243 +0000 UTC m=+3.520077028 container died d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:35:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:58 compute-0 ceph-mon[75237]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:58 compute-0 sshd-session[116493]: Accepted publickey for zuul from 192.168.122.30 port 40796 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:35:58 compute-0 systemd-logind[782]: New session 37 of user zuul.
Nov 29 07:35:58 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 29 07:35:58 compute-0 sshd-session[116493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:35:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5035dd1c2a5beb013fdac36c748f5fa505b02858d8babf90f48b8468d3a3389-merged.mount: Deactivated successfully.
Nov 29 07:35:59 compute-0 python3.9[116648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:36:00 compute-0 python3.9[116802]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:36:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:01 compute-0 ceph-mon[75237]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:01 compute-0 python3.9[116995]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:01 compute-0 podman[116457]: 2025-11-29 07:36:01.770569206 +0000 UTC m=+8.086837971 container remove d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:36:01 compute-0 systemd[1]: libpod-conmon-d02f4345160beefdc287bd65fe9bd6312a82b3001636af26d171d51ccaf7e02e.scope: Deactivated successfully.
Nov 29 07:36:02 compute-0 podman[117028]: 2025-11-29 07:36:01.996757509 +0000 UTC m=+0.029380278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:36:02 compute-0 sshd-session[116498]: Connection closed by 192.168.122.30 port 40796
Nov 29 07:36:02 compute-0 sshd-session[116493]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:36:02 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 07:36:02 compute-0 systemd[1]: session-37.scope: Consumed 2.771s CPU time.
Nov 29 07:36:02 compute-0 systemd-logind[782]: Session 37 logged out. Waiting for processes to exit.
Nov 29 07:36:02 compute-0 systemd-logind[782]: Removed session 37.
Nov 29 07:36:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:07 compute-0 sshd-session[117042]: Accepted publickey for zuul from 192.168.122.30 port 51644 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:36:07 compute-0 systemd-logind[782]: New session 38 of user zuul.
Nov 29 07:36:07 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 29 07:36:07 compute-0 sshd-session[117042]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:36:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:08 compute-0 python3.9[117195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:36:09 compute-0 podman[117028]: 2025-11-29 07:36:09.208825089 +0000 UTC m=+7.241447858 container create b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:36:09 compute-0 ceph-mon[75237]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:09 compute-0 ceph-mon[75237]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:09 compute-0 systemd[1]: Started libpod-conmon-b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6.scope.
Nov 29 07:36:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:09 compute-0 python3.9[117352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:36:10 compute-0 podman[117028]: 2025-11-29 07:36:10.085301423 +0000 UTC m=+8.117924212 container init b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:36:10 compute-0 podman[117028]: 2025-11-29 07:36:10.095365196 +0000 UTC m=+8.127987955 container start b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:36:10 compute-0 podman[117028]: 2025-11-29 07:36:10.25797155 +0000 UTC m=+8.290594349 container attach b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:36:10 compute-0 ceph-mon[75237]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:10 compute-0 ceph-mon[75237]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:10 compute-0 ceph-mon[75237]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:10 compute-0 ceph-mon[75237]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:10 compute-0 sudo[117511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogeqttnymqbdjfpxytziueuxyqlcoks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401770.141014-40-88858282377266/AnsiballZ_setup.py'
Nov 29 07:36:10 compute-0 sudo[117511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:10 compute-0 python3.9[117513]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:36:10 compute-0 sudo[117511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:11 compute-0 gifted_wiles[117353]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:36:11 compute-0 gifted_wiles[117353]: --> relative data size: 1.0
Nov 29 07:36:11 compute-0 gifted_wiles[117353]: --> All data devices are unavailable
Nov 29 07:36:11 compute-0 systemd[1]: libpod-b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6.scope: Deactivated successfully.
Nov 29 07:36:11 compute-0 podman[117028]: 2025-11-29 07:36:11.285211728 +0000 UTC m=+9.317834497 container died b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:36:11 compute-0 systemd[1]: libpod-b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6.scope: Consumed 1.152s CPU time.
Nov 29 07:36:11 compute-0 sudo[117631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtfzrvijvowtbcwybjobdjmcbnzkpays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401770.141014-40-88858282377266/AnsiballZ_dnf.py'
Nov 29 07:36:11 compute-0 sudo[117631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:11 compute-0 python3.9[117633]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:36:11 compute-0 sshd-session[108550]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 07:36:11 compute-0 sshd-session[108550]: Connection reset by 101.47.142.104 port 54560
Nov 29 07:36:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:13 compute-0 sshd-session[117635]: Invalid user system from 20.185.243.158 port 60070
Nov 29 07:36:13 compute-0 sshd-session[117635]: Received disconnect from 20.185.243.158 port 60070:11: Bye Bye [preauth]
Nov 29 07:36:13 compute-0 sshd-session[117635]: Disconnected from invalid user system 20.185.243.158 port 60070 [preauth]
Nov 29 07:36:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:14 compute-0 ceph-mon[75237]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:15 compute-0 sudo[117631]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eef62c62d1048950fda4382a6e3a8e44dc5a187fcf7eca016eb69ca895911f4-merged.mount: Deactivated successfully.
Nov 29 07:36:15 compute-0 podman[117028]: 2025-11-29 07:36:15.146397957 +0000 UTC m=+13.179020716 container remove b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:36:15 compute-0 systemd[1]: libpod-conmon-b9ce454e040d2db3b58e70501f6bddf65bdb178058b360c252c486040e6b52b6.scope: Deactivated successfully.
Nov 29 07:36:15 compute-0 sudo[116392]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:15 compute-0 sudo[117676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:15 compute-0 sudo[117676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:15 compute-0 sudo[117676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:15 compute-0 sudo[117733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:36:15 compute-0 sudo[117733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:15 compute-0 sudo[117733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:15 compute-0 sudo[117769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:15 compute-0 sudo[117769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:15 compute-0 sudo[117769]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:15 compute-0 sudo[117814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:36:15 compute-0 sudo[117814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:15 compute-0 sudo[117889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcfaoeigxfyfowseuipxdoekphsptorv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401775.2070487-52-269221783612691/AnsiballZ_setup.py'
Nov 29 07:36:15 compute-0 sudo[117889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:15 compute-0 python3.9[117891]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.747759422 +0000 UTC m=+0.024074460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.859365604 +0000 UTC m=+0.135680622 container create 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:36:15 compute-0 systemd[1]: Started libpod-conmon-73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201.scope.
Nov 29 07:36:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.949025914 +0000 UTC m=+0.225340962 container init 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.957958547 +0000 UTC m=+0.234273565 container start 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:36:15 compute-0 modest_newton[117960]: 167 167
Nov 29 07:36:15 compute-0 systemd[1]: libpod-73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201.scope: Deactivated successfully.
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.966140511 +0000 UTC m=+0.242455529 container attach 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:36:15 compute-0 podman[117930]: 2025-11-29 07:36:15.967599979 +0000 UTC m=+0.243915037 container died 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:36:15 compute-0 ceph-mon[75237]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:15 compute-0 ceph-mon[75237]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-11e19644299b3f64926bfb1377c82ce4f1c733269f40ac245c4a2a720d3ea480-merged.mount: Deactivated successfully.
Nov 29 07:36:16 compute-0 podman[117930]: 2025-11-29 07:36:16.017691546 +0000 UTC m=+0.294006564 container remove 73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:36:16 compute-0 systemd[1]: libpod-conmon-73aad8f4b002421cd8b692cccf651d0b018e6a96baef8cf3c74fb66a87c69201.scope: Deactivated successfully.
Nov 29 07:36:16 compute-0 sudo[117889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:16 compute-0 podman[118013]: 2025-11-29 07:36:16.206492733 +0000 UTC m=+0.069342220 container create cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:36:16 compute-0 systemd[1]: Started libpod-conmon-cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe.scope.
Nov 29 07:36:16 compute-0 podman[118013]: 2025-11-29 07:36:16.164289642 +0000 UTC m=+0.027139139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:36:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08aa1ac786fe1b40f37d65bd53e9d320199d1226b173e2ee03b64fb8512563e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08aa1ac786fe1b40f37d65bd53e9d320199d1226b173e2ee03b64fb8512563e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08aa1ac786fe1b40f37d65bd53e9d320199d1226b173e2ee03b64fb8512563e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e08aa1ac786fe1b40f37d65bd53e9d320199d1226b173e2ee03b64fb8512563e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:16 compute-0 podman[118013]: 2025-11-29 07:36:16.297553339 +0000 UTC m=+0.160402836 container init cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:36:16 compute-0 podman[118013]: 2025-11-29 07:36:16.304379638 +0000 UTC m=+0.167229115 container start cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:36:16 compute-0 podman[118013]: 2025-11-29 07:36:16.311238127 +0000 UTC m=+0.174087634 container attach cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:36:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:16 compute-0 sudo[118183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anuwpydsabdmvxlrizghxqsclbpymtqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401776.35744-63-50121699690681/AnsiballZ_file.py'
Nov 29 07:36:16 compute-0 sudo[118183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:17 compute-0 python3.9[118185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:17 compute-0 sudo[118183]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 bold_brattain[118053]: {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     "0": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "devices": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "/dev/loop3"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             ],
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_name": "ceph_lv0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_size": "21470642176",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "name": "ceph_lv0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "tags": {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_name": "ceph",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.crush_device_class": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.encrypted": "0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_id": "0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.vdo": "0"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             },
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "vg_name": "ceph_vg0"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         }
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     ],
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     "1": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "devices": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "/dev/loop4"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             ],
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_name": "ceph_lv1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_size": "21470642176",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "name": "ceph_lv1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "tags": {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_name": "ceph",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.crush_device_class": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.encrypted": "0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_id": "1",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.vdo": "0"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             },
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "vg_name": "ceph_vg1"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         }
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     ],
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     "2": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "devices": [
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "/dev/loop5"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             ],
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_name": "ceph_lv2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_size": "21470642176",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "name": "ceph_lv2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "tags": {
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.cluster_name": "ceph",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.crush_device_class": "",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.encrypted": "0",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osd_id": "2",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:                 "ceph.vdo": "0"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             },
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "type": "block",
Nov 29 07:36:17 compute-0 bold_brattain[118053]:             "vg_name": "ceph_vg2"
Nov 29 07:36:17 compute-0 bold_brattain[118053]:         }
Nov 29 07:36:17 compute-0 bold_brattain[118053]:     ]
Nov 29 07:36:17 compute-0 bold_brattain[118053]: }
Nov 29 07:36:17 compute-0 systemd[1]: libpod-cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe.scope: Deactivated successfully.
Nov 29 07:36:17 compute-0 podman[118013]: 2025-11-29 07:36:17.148435126 +0000 UTC m=+1.011284613 container died cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e08aa1ac786fe1b40f37d65bd53e9d320199d1226b173e2ee03b64fb8512563e-merged.mount: Deactivated successfully.
Nov 29 07:36:17 compute-0 podman[118013]: 2025-11-29 07:36:17.286253022 +0000 UTC m=+1.149102499 container remove cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:36:17 compute-0 systemd[1]: libpod-conmon-cadbed38025c5eb53f0313c3fc2f3f89928209d7a7b05b42479947587d8cb5fe.scope: Deactivated successfully.
Nov 29 07:36:17 compute-0 sudo[117814]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 sudo[118280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:17 compute-0 sudo[118280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:17 compute-0 sudo[118280]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 sudo[118305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:36:17 compute-0 sudo[118305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:17 compute-0 sudo[118305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 sudo[118353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:17 compute-0 sudo[118353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:17 compute-0 sudo[118353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 sudo[118401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:36:17 compute-0 sudo[118401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:17 compute-0 sudo[118452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdowyiaqvmcpjirgzsswogvsbynprlgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401777.1884365-71-165162806626828/AnsiballZ_command.py'
Nov 29 07:36:17 compute-0 sudo[118452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:17 compute-0 python3.9[118455]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:17 compute-0 sudo[118452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:17 compute-0 podman[118509]: 2025-11-29 07:36:17.971126187 +0000 UTC m=+0.065475880 container create 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:36:18 compute-0 systemd[1]: Started libpod-conmon-741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac.scope.
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:17.942649333 +0000 UTC m=+0.036999116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:36:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:18.057274515 +0000 UTC m=+0.151624228 container init 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:18.065172781 +0000 UTC m=+0.159522474 container start 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:18.069024541 +0000 UTC m=+0.163374234 container attach 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:36:18 compute-0 amazing_wilbur[118550]: 167 167
Nov 29 07:36:18 compute-0 systemd[1]: libpod-741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac.scope: Deactivated successfully.
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:18.071299121 +0000 UTC m=+0.165648834 container died 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:36:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa5c1cc3c18ac47cb3d0225ce65e208534eaf5793ec4dcd2af6b829fd4dc44e-merged.mount: Deactivated successfully.
Nov 29 07:36:18 compute-0 podman[118509]: 2025-11-29 07:36:18.111365707 +0000 UTC m=+0.205715410 container remove 741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:36:18 compute-0 systemd[1]: libpod-conmon-741189df5b8c93ed6e6386eeeed05bf228f23bbb4b880d4d435cfcee22df64ac.scope: Deactivated successfully.
Nov 29 07:36:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:18 compute-0 ceph-mon[75237]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:18 compute-0 podman[118624]: 2025-11-29 07:36:18.285206443 +0000 UTC m=+0.046815653 container create 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:36:18 compute-0 systemd[1]: Started libpod-conmon-760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a.scope.
Nov 29 07:36:18 compute-0 podman[118624]: 2025-11-29 07:36:18.264848972 +0000 UTC m=+0.026458112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:36:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c329f79eea0d9696a7cd2937320ec00eca75823115cbeb9ad1cb2903b141a5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c329f79eea0d9696a7cd2937320ec00eca75823115cbeb9ad1cb2903b141a5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c329f79eea0d9696a7cd2937320ec00eca75823115cbeb9ad1cb2903b141a5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c329f79eea0d9696a7cd2937320ec00eca75823115cbeb9ad1cb2903b141a5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:36:18 compute-0 sudo[118716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crnkvpbhssjdvijfnkobrazblxpgqnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401778.0522041-79-80558753623308/AnsiballZ_stat.py'
Nov 29 07:36:18 compute-0 sudo[118716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:18 compute-0 podman[118624]: 2025-11-29 07:36:18.516015967 +0000 UTC m=+0.277625097 container init 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:36:18 compute-0 podman[118624]: 2025-11-29 07:36:18.525606648 +0000 UTC m=+0.287215768 container start 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:36:18 compute-0 podman[118624]: 2025-11-29 07:36:18.542647122 +0000 UTC m=+0.304256232 container attach 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:36:18 compute-0 python3.9[118718]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:18 compute-0 sudo[118716]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:18 compute-0 sudo[118797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncuiftnxdwizkuxxlvijhwsryfhzkvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401778.0522041-79-80558753623308/AnsiballZ_file.py'
Nov 29 07:36:18 compute-0 sudo[118797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:19 compute-0 python3.9[118799]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:19 compute-0 sudo[118797]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:19 compute-0 jovial_swanson[118663]: {
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_id": 2,
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "type": "bluestore"
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     },
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_id": 0,
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "type": "bluestore"
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     },
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_id": 1,
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:         "type": "bluestore"
Nov 29 07:36:19 compute-0 jovial_swanson[118663]:     }
Nov 29 07:36:19 compute-0 jovial_swanson[118663]: }
Nov 29 07:36:19 compute-0 systemd[1]: libpod-760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a.scope: Deactivated successfully.
Nov 29 07:36:19 compute-0 systemd[1]: libpod-760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a.scope: Consumed 1.028s CPU time.
Nov 29 07:36:19 compute-0 podman[118950]: 2025-11-29 07:36:19.59095178 +0000 UTC m=+0.025252050 container died 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:36:19 compute-0 sudo[118988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyejgrawlfkhyahwfotgqtpdgpjxyslz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401779.2981374-91-102134420673128/AnsiballZ_stat.py'
Nov 29 07:36:19 compute-0 sudo[118988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c329f79eea0d9696a7cd2937320ec00eca75823115cbeb9ad1cb2903b141a5b-merged.mount: Deactivated successfully.
Nov 29 07:36:19 compute-0 python3.9[118990]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:19 compute-0 podman[118950]: 2025-11-29 07:36:19.841569631 +0000 UTC m=+0.275869891 container remove 760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swanson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:36:19 compute-0 systemd[1]: libpod-conmon-760e5f833dd92028ffaa6ae35eb1469b612d8365045d215de50105180eb4c36a.scope: Deactivated successfully.
Nov 29 07:36:19 compute-0 sudo[118401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:36:19 compute-0 sudo[118988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:36:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:36:20 compute-0 sudo[119068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjbstlnayubjsymyfzxrgogtnxitbepe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401779.2981374-91-102134420673128/AnsiballZ_file.py'
Nov 29 07:36:20 compute-0 sudo[119068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:36:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f1c162c4-62d2-4ed4-8c44-f7246366dae8 does not exist
Nov 29 07:36:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6ec1c439-07e8-44a8-859e-4425fdf6ee81 does not exist
Nov 29 07:36:20 compute-0 sudo[119071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:20 compute-0 sudo[119071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-0 sudo[119071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-0 sudo[119096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:36:20 compute-0 sudo[119096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:20 compute-0 sudo[119096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-0 python3.9[119070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:20 compute-0 sudo[119068]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:20 compute-0 ceph-mon[75237]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:36:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:36:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:20 compute-0 sudo[119270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsuicbjmsazerdagjnnpguskqoyatpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401780.5211203-104-164571708343417/AnsiballZ_ini_file.py'
Nov 29 07:36:20 compute-0 sudo[119270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:21 compute-0 python3.9[119272]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:21 compute-0 sudo[119270]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:21 compute-0 sudo[119422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crwqggkpqlidsxbcckrhvzuwchjewypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401781.2933881-104-69580352120367/AnsiballZ_ini_file.py'
Nov 29 07:36:21 compute-0 sudo[119422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:21 compute-0 python3.9[119424]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:21 compute-0 sudo[119422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:22 compute-0 sudo[119574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmksuvrkmgvveaffwtctqxfanhdixrpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401781.9943304-104-92543358034864/AnsiballZ_ini_file.py'
Nov 29 07:36:22 compute-0 sudo[119574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:22 compute-0 python3.9[119576]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:22 compute-0 sudo[119574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:22 compute-0 ceph-mon[75237]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:22 compute-0 sudo[119726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudxyotedcitmhcbjzthdpecvvcugnad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401782.6119452-104-144299672897178/AnsiballZ_ini_file.py'
Nov 29 07:36:22 compute-0 sudo[119726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:23 compute-0 python3.9[119728]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:23 compute-0 sudo[119726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:23 compute-0 sudo[119878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwvuorhtqdzjtijlglpzxvnwpvhmxrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401783.4258175-135-98153894589915/AnsiballZ_dnf.py'
Nov 29 07:36:23 compute-0 sudo[119878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:23 compute-0 python3.9[119880]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:36:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:24 compute-0 ceph-mon[75237]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:26 compute-0 sudo[119878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:26 compute-0 ceph-mon[75237]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:27 compute-0 sudo[120031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drsaagcxxgwbfvdbppokkwflzxyfdobm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401787.0097458-146-12425397042369/AnsiballZ_setup.py'
Nov 29 07:36:27 compute-0 sudo[120031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:27 compute-0 python3.9[120033]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:36:27 compute-0 sudo[120031]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:27 compute-0 ceph-mon[75237]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:28 compute-0 sudo[120185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpembnngsjojiwahrnvlxwpazgutmdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401787.838486-154-257933919830341/AnsiballZ_stat.py'
Nov 29 07:36:28 compute-0 sudo[120185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:28 compute-0 python3.9[120187]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:36:28 compute-0 sudo[120185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:28 compute-0 sudo[120337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfwcuuvqepdgldoohbqgavgcuovoxrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401788.6080291-163-188477328083958/AnsiballZ_stat.py'
Nov 29 07:36:28 compute-0 sudo[120337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:31 compute-0 python3.9[120339]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:36:31 compute-0 sudo[120337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:31 compute-0 sudo[120491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwnwczzmbatlsequwbpvdhhoeptoyihk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401791.6057816-173-165666453867661/AnsiballZ_command.py'
Nov 29 07:36:31 compute-0 sudo[120491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:32 compute-0 python3.9[120493]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:32 compute-0 sudo[120491]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:32 compute-0 ceph-mon[75237]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:32 compute-0 sshd-session[120340]: Received disconnect from 103.236.140.19 port 36368:11: Bye Bye [preauth]
Nov 29 07:36:32 compute-0 sshd-session[120340]: Disconnected from authenticating user root 103.236.140.19 port 36368 [preauth]
Nov 29 07:36:32 compute-0 sudo[120644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewemfaddexujybdgnolkvxhkzxdtnskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401792.4114444-183-259577304586336/AnsiballZ_service_facts.py'
Nov 29 07:36:32 compute-0 sudo[120644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:33 compute-0 python3.9[120646]: ansible-service_facts Invoked
Nov 29 07:36:33 compute-0 network[120663]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:36:33 compute-0 network[120664]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:36:33 compute-0 network[120665]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:36:34 compute-0 sshd[1003]: Timeout before authentication for connection from 45.78.219.195 to 38.102.83.203, pid = 108551
Nov 29 07:36:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:35 compute-0 ceph-mon[75237]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:36 compute-0 sudo[120644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:36 compute-0 ceph-mon[75237]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:36 compute-0 ceph-mon[75237]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:37 compute-0 sudo[120950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jisowbjetiqxyvfxoquygxbjnizvaiqi ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764401797.385749-198-128475571443207/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764401797.385749-198-128475571443207/args'
Nov 29 07:36:37 compute-0 sudo[120950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:37 compute-0 sudo[120950]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:37 compute-0 ceph-mon[75237]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:38 compute-0 sshd-session[120800]: Invalid user supermaint from 114.34.106.146 port 33580
Nov 29 07:36:38 compute-0 sshd-session[120800]: Received disconnect from 114.34.106.146 port 33580:11: Bye Bye [preauth]
Nov 29 07:36:38 compute-0 sshd-session[120800]: Disconnected from invalid user supermaint 114.34.106.146 port 33580 [preauth]
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:38 compute-0 sudo[121117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wimurjouszyamsjruninhyrmwakdcogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401798.234467-209-35470233580515/AnsiballZ_dnf.py'
Nov 29 07:36:38 compute-0 sudo[121117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:36:38
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'images']
Nov 29 07:36:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:36:38 compute-0 python3.9[121119]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:36:40 compute-0 ceph-mon[75237]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:40 compute-0 sudo[121117]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:41 compute-0 sudo[121270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fthpchaepgqegnfgmuplxefluktfbkdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401800.7734315-222-29253835544751/AnsiballZ_package_facts.py'
Nov 29 07:36:41 compute-0 sudo[121270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:41 compute-0 python3.9[121272]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:36:41 compute-0 sudo[121270]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:42 compute-0 ceph-mon[75237]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:36:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:42 compute-0 sudo[121422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtahtwwptmkbjqnhbhfoblggytihjwif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401802.5793486-232-206283237999324/AnsiballZ_stat.py'
Nov 29 07:36:42 compute-0 sudo[121422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:43 compute-0 python3.9[121424]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:43 compute-0 sudo[121422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:43 compute-0 sudo[121500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrhbemyvysteoldrcwlfwdvyofimydy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401802.5793486-232-206283237999324/AnsiballZ_file.py'
Nov 29 07:36:43 compute-0 sudo[121500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:43 compute-0 python3.9[121502]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:43 compute-0 sudo[121500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:44 compute-0 ceph-mon[75237]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:44 compute-0 sudo[121652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrufklzlpuzrcurqlxsrgweioqvfvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401803.952544-244-231983208390668/AnsiballZ_stat.py'
Nov 29 07:36:44 compute-0 sudo[121652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:44 compute-0 python3.9[121654]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:44 compute-0 sudo[121652]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:44 compute-0 sudo[121730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwfsrpgermatjdgkoxndoijrelhpnmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401803.952544-244-231983208390668/AnsiballZ_file.py'
Nov 29 07:36:44 compute-0 sudo[121730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:44 compute-0 sshd-session[121733]: Connection closed by 101.168.0.223 port 36398
Nov 29 07:36:45 compute-0 python3.9[121732]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:45 compute-0 sudo[121730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:45 compute-0 sudo[121883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppsrxojfswbsapcufnrbuerflnbwnatp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401805.5065258-262-245820468367670/AnsiballZ_lineinfile.py'
Nov 29 07:36:45 compute-0 sudo[121883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:46 compute-0 python3.9[121885]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:46 compute-0 sudo[121883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:46 compute-0 ceph-mon[75237]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:36:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1595 writes, 7334 keys, 1595 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1595 writes, 1595 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1595 writes, 7334 keys, 1595 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
                                           Interval WAL: 1595 writes, 1595 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 308.00 MB usage: 46.42 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(8,45.64 KB,0.0144711%) FilterBlock(2,0.42 KB,0.000133762%) IndexBlock(2,0.36 KB,0.000113946%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:36:47 compute-0 sudo[122035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohwjvkeetrhnqxtghgglsqnnbivgnygv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401806.7117982-277-257611877639195/AnsiballZ_setup.py'
Nov 29 07:36:47 compute-0 sudo[122035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:47 compute-0 python3.9[122037]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:36:47 compute-0 sudo[122035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:48 compute-0 sudo[122121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdnnamglwolwngdqdhfbjvwcmnbldaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401806.7117982-277-257611877639195/AnsiballZ_systemd.py'
Nov 29 07:36:48 compute-0 sudo[122121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:48 compute-0 ceph-mon[75237]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:48 compute-0 python3.9[122123]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:48 compute-0 sudo[122121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:49 compute-0 sshd-session[117045]: Connection closed by 192.168.122.30 port 51644
Nov 29 07:36:49 compute-0 sshd-session[117042]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:36:49 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 07:36:49 compute-0 systemd[1]: session-38.scope: Consumed 25.820s CPU time.
Nov 29 07:36:49 compute-0 systemd-logind[782]: Session 38 logged out. Waiting for processes to exit.
Nov 29 07:36:49 compute-0 systemd-logind[782]: Removed session 38.
Nov 29 07:36:50 compute-0 sshd-session[122126]: Invalid user vps from 103.234.151.178 port 37704
Nov 29 07:36:50 compute-0 sshd-session[122126]: Received disconnect from 103.234.151.178 port 37704:11: Bye Bye [preauth]
Nov 29 07:36:50 compute-0 sshd-session[122126]: Disconnected from invalid user vps 103.234.151.178 port 37704 [preauth]
Nov 29 07:36:50 compute-0 ceph-mon[75237]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:52 compute-0 ceph-mon[75237]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:52 compute-0 sshd-session[122046]: Invalid user devops from 101.47.142.104 port 33190
Nov 29 07:36:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:52 compute-0 sshd-session[122046]: Received disconnect from 101.47.142.104 port 33190:11: Bye Bye [preauth]
Nov 29 07:36:52 compute-0 sshd-session[122046]: Disconnected from invalid user devops 101.47.142.104 port 33190 [preauth]
Nov 29 07:36:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:54 compute-0 ceph-mon[75237]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:55 compute-0 sshd-session[122152]: Accepted publickey for zuul from 192.168.122.30 port 35314 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:36:55 compute-0 systemd-logind[782]: New session 39 of user zuul.
Nov 29 07:36:55 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 29 07:36:55 compute-0 sshd-session[122152]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:36:56 compute-0 sudo[122305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywrqgewvfxotmsdqvhfwxxfbzeatmsxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401815.5776863-22-69116603150673/AnsiballZ_file.py'
Nov 29 07:36:56 compute-0 sudo[122305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:56 compute-0 python3.9[122307]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:56 compute-0 sudo[122305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:56 compute-0 ceph-mon[75237]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:57 compute-0 sudo[122457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-objxhdeexyaxgvkgusgslprjpossptip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401816.4816413-34-54498779933592/AnsiballZ_stat.py'
Nov 29 07:36:57 compute-0 sudo[122457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:57 compute-0 python3.9[122459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:57 compute-0 sudo[122457]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sudo[122535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwiihtsnlbcaenhcfradwgysssraxnzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401816.4816413-34-54498779933592/AnsiballZ_file.py'
Nov 29 07:36:57 compute-0 sudo[122535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:57 compute-0 python3.9[122537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:57 compute-0 sudo[122535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sshd-session[122155]: Connection closed by 192.168.122.30 port 35314
Nov 29 07:36:57 compute-0 sshd-session[122152]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:36:57 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 07:36:57 compute-0 systemd[1]: session-39.scope: Consumed 1.611s CPU time.
Nov 29 07:36:57 compute-0 systemd-logind[782]: Session 39 logged out. Waiting for processes to exit.
Nov 29 07:36:57 compute-0 systemd-logind[782]: Removed session 39.
Nov 29 07:36:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:59 compute-0 ceph-mon[75237]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:59 compute-0 sshd[1003]: drop connection #0 from [45.78.219.195]:44916 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:37:00 compute-0 ceph-mon[75237]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.254420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820254576, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7688, "num_deletes": 251, "total_data_size": 10248291, "memory_usage": 10519744, "flush_reason": "Manual Compaction"}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820319802, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8541861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7826, "table_properties": {"data_size": 8511627, "index_size": 20071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 85854, "raw_average_key_size": 23, "raw_value_size": 8441109, "raw_average_value_size": 2329, "num_data_blocks": 874, "num_entries": 3624, "num_filter_entries": 3624, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401206, "oldest_key_time": 1764401206, "file_creation_time": 1764401820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 65440 microseconds, and 18371 cpu microseconds.
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.319873) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8541861 bytes OK
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.319892) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.323678) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.323692) EVENT_LOG_v1 {"time_micros": 1764401820323688, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.323714) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10213724, prev total WAL file size 10213724, number of live WAL files 2.
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.326870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8341KB) 13(53KB) 8(1944B)]
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820327023, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8599062, "oldest_snapshot_seqno": -1}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3440 keys, 8554458 bytes, temperature: kUnknown
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820396992, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8554458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8524603, "index_size": 20145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 83926, "raw_average_key_size": 24, "raw_value_size": 8455657, "raw_average_value_size": 2458, "num_data_blocks": 879, "num_entries": 3440, "num_filter_entries": 3440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764401820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.397318) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8554458 bytes
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.399476) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.7 rd, 122.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(8.2, 0.0 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3730, records dropped: 290 output_compression: NoCompression
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.399503) EVENT_LOG_v1 {"time_micros": 1764401820399491, "job": 4, "event": "compaction_finished", "compaction_time_micros": 70105, "compaction_time_cpu_micros": 22173, "output_level": 6, "num_output_files": 1, "total_output_size": 8554458, "num_input_records": 3730, "num_output_records": 3440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820401462, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820401543, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401820401630, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 07:37:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:37:00.326606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:02 compute-0 ceph-mon[75237]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:03 compute-0 sshd-session[122563]: Accepted publickey for zuul from 192.168.122.30 port 56524 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:37:03 compute-0 systemd-logind[782]: New session 40 of user zuul.
Nov 29 07:37:03 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 29 07:37:03 compute-0 sshd-session[122563]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:37:04 compute-0 ceph-mon[75237]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:04 compute-0 python3.9[122716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:37:05 compute-0 sudo[122870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjnkycwuivfyfqkgxwnjmdgypiybcms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401825.1547549-33-189345600441573/AnsiballZ_file.py'
Nov 29 07:37:05 compute-0 sudo[122870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:05 compute-0 python3.9[122872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:05 compute-0 sudo[122870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:06 compute-0 ceph-mon[75237]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:06 compute-0 sudo[123045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdteofqkyhrvcduobbduqcozjjofuss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401826.157424-41-51972098104671/AnsiballZ_stat.py'
Nov 29 07:37:06 compute-0 sudo[123045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:06 compute-0 python3.9[123047]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:06 compute-0 sudo[123045]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:07 compute-0 sudo[123123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtjgvdqnobqkjtqanmxijtooyclptwhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401826.157424-41-51972098104671/AnsiballZ_file.py'
Nov 29 07:37:07 compute-0 sudo[123123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:07 compute-0 python3.9[123125]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.25q_ea2d recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:07 compute-0 sudo[123123]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:08 compute-0 sudo[123275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vudsyuznyswyeylvgoflicltptqyivms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401827.7770703-61-14290002985743/AnsiballZ_stat.py'
Nov 29 07:37:08 compute-0 sudo[123275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:08 compute-0 python3.9[123277]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:08 compute-0 sudo[123275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:08 compute-0 sudo[123353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthncxqsastylcsnqhzrsfyszexesrec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401827.7770703-61-14290002985743/AnsiballZ_file.py'
Nov 29 07:37:08 compute-0 sudo[123353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:08 compute-0 python3.9[123355]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.k2d_a0lv recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:08 compute-0 sudo[123353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:08 compute-0 ceph-mon[75237]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:09 compute-0 sudo[123505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auxvfqvskrfskhazrbjabqyghkesitjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401828.9569912-74-239808894249268/AnsiballZ_file.py'
Nov 29 07:37:09 compute-0 sudo[123505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:09 compute-0 python3.9[123507]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:09 compute-0 sudo[123505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:09 compute-0 sudo[123657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zobmzsuvxyybrjnobhzbszwhidmiqgva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401829.6163158-82-83680977390602/AnsiballZ_stat.py'
Nov 29 07:37:09 compute-0 sudo[123657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:10 compute-0 ceph-mon[75237]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:10 compute-0 python3.9[123659]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:10 compute-0 sudo[123657]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:11 compute-0 sudo[123735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwainrjkivnoehlpursfytkmwdrckutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401829.6163158-82-83680977390602/AnsiballZ_file.py'
Nov 29 07:37:11 compute-0 sudo[123735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:11 compute-0 python3.9[123737]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:11 compute-0 sudo[123735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:11 compute-0 ceph-mon[75237]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:11 compute-0 sudo[123887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-butadiwgolzbgvhisnyyqvjxjhpobmdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401831.4743404-82-265363636180747/AnsiballZ_stat.py'
Nov 29 07:37:11 compute-0 sudo[123887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:11 compute-0 python3.9[123889]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:11 compute-0 sudo[123887]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 sudo[123965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnossxczfndhmsoibfjbdhntsxumwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401831.4743404-82-265363636180747/AnsiballZ_file.py'
Nov 29 07:37:12 compute-0 sudo[123965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:12 compute-0 python3.9[123967]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:12 compute-0 sudo[123965]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:13 compute-0 sudo[124118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmqohhzedybobcoearxdzeherhqipkpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401832.979494-105-120878538554967/AnsiballZ_file.py'
Nov 29 07:37:13 compute-0 sudo[124118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:13 compute-0 python3.9[124120]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:13 compute-0 sudo[124118]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:13 compute-0 ceph-mon[75237]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:13 compute-0 sudo[124270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwkvxffqthfxuwznzfswsjwvpmhebgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401833.6378934-113-178555030509272/AnsiballZ_stat.py'
Nov 29 07:37:13 compute-0 sudo[124270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:14 compute-0 python3.9[124272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:14 compute-0 sudo[124270]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:14 compute-0 sudo[124348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosizvdittkfqbvxxmgwzskfwlnbgvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401833.6378934-113-178555030509272/AnsiballZ_file.py'
Nov 29 07:37:14 compute-0 sudo[124348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:14 compute-0 python3.9[124350]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:14 compute-0 sudo[124348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:15 compute-0 sudo[124500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pahijcpmsvqwwilqnwhkrzvujqhrrcxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401834.8075588-125-192494468962848/AnsiballZ_stat.py'
Nov 29 07:37:15 compute-0 sudo[124500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:16 compute-0 python3.9[124502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:16 compute-0 sudo[124500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:16 compute-0 sudo[124578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qntafpzzhceejvxmjzfbxbpgxfeyitzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401834.8075588-125-192494468962848/AnsiballZ_file.py'
Nov 29 07:37:16 compute-0 sudo[124578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:16 compute-0 python3.9[124580]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:16 compute-0 sudo[124578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:17 compute-0 ceph-mon[75237]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:17 compute-0 sudo[124730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzyuqhzeakkiyvfnafpqzlfliwwwwuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401836.7709963-137-184408287823199/AnsiballZ_systemd.py'
Nov 29 07:37:17 compute-0 sudo[124730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:17 compute-0 python3.9[124732]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:37:17 compute-0 systemd[1]: Reloading.
Nov 29 07:37:17 compute-0 systemd-rc-local-generator[124760]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:37:17 compute-0 systemd-sysv-generator[124763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:37:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:18 compute-0 ceph-mon[75237]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:18 compute-0 sudo[124730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:18 compute-0 sudo[124920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixozzwunrnklgkyvoqwpmskwvoefrlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401838.4396465-145-217190963064436/AnsiballZ_stat.py'
Nov 29 07:37:18 compute-0 sudo[124920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:18 compute-0 python3.9[124922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:18 compute-0 sudo[124920]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:19 compute-0 sudo[124998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwekipwqlvynqjqxqaefcfcrrljsbllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401838.4396465-145-217190963064436/AnsiballZ_file.py'
Nov 29 07:37:19 compute-0 sudo[124998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:19 compute-0 python3.9[125000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:19 compute-0 sudo[124998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:19 compute-0 sudo[125150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjzzlinjxeostkicvadekvtkmcqfxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401839.6284685-157-158122234010422/AnsiballZ_stat.py'
Nov 29 07:37:19 compute-0 sudo[125150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:20 compute-0 python3.9[125152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:20 compute-0 sudo[125150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:20 compute-0 sudo[125178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:20 compute-0 sudo[125178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:20 compute-0 sudo[125178]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:20 compute-0 sudo[125226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:20 compute-0 sudo[125226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:20 compute-0 sudo[125226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:20 compute-0 sudo[125279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lishipwiklpztngvjzhqpjrgtrifebvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401839.6284685-157-158122234010422/AnsiballZ_file.py'
Nov 29 07:37:20 compute-0 sudo[125279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:20 compute-0 sudo[125278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:20 compute-0 sudo[125278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:20 compute-0 sudo[125278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:20 compute-0 sudo[125306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:37:20 compute-0 sudo[125306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:20 compute-0 ceph-mon[75237]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:20 compute-0 python3.9[125288]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:20 compute-0 sudo[125279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 sudo[125306]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c1a593d6-2d36-4085-929a-1f184e427544 does not exist
Nov 29 07:37:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 71337813-7155-41f9-9820-b82c84fd204d does not exist
Nov 29 07:37:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 0436b474-7cac-4575-a844-c7841465a162 does not exist
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:37:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:21 compute-0 sudo[125362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:21 compute-0 sudo[125362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-0 sudo[125362]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 sudo[125387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:21 compute-0 sudo[125387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-0 sudo[125387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 sudo[125412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:21 compute-0 sudo[125412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-0 sudo[125412]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 sudo[125437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:37:21 compute-0 sudo[125437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:21 compute-0 ceph-mon[75237]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:37:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:21 compute-0 podman[125580]: 2025-11-29 07:37:21.799665818 +0000 UTC m=+0.030362096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:21 compute-0 podman[125580]: 2025-11-29 07:37:21.956170094 +0000 UTC m=+0.186866362 container create 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:37:21 compute-0 sudo[125662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdjiokgaakmyvwesctojpfadoljcdoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401841.6244204-169-54754722742494/AnsiballZ_systemd.py'
Nov 29 07:37:21 compute-0 sudo[125662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:22 compute-0 systemd[1]: Started libpod-conmon-665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24.scope.
Nov 29 07:37:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:22 compute-0 podman[125580]: 2025-11-29 07:37:22.233364651 +0000 UTC m=+0.464060929 container init 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:37:22 compute-0 podman[125580]: 2025-11-29 07:37:22.240888434 +0000 UTC m=+0.471584662 container start 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:37:22 compute-0 podman[125580]: 2025-11-29 07:37:22.245019539 +0000 UTC m=+0.475715827 container attach 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:37:22 compute-0 festive_lumiere[125667]: 167 167
Nov 29 07:37:22 compute-0 systemd[1]: libpod-665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24.scope: Deactivated successfully.
Nov 29 07:37:22 compute-0 podman[125580]: 2025-11-29 07:37:22.247237525 +0000 UTC m=+0.477933753 container died 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5cd79af8503ed1bbeb81b352be2001bb318c524e22dacc91996d9e868c18d3-merged.mount: Deactivated successfully.
Nov 29 07:37:22 compute-0 podman[125580]: 2025-11-29 07:37:22.293795705 +0000 UTC m=+0.524491933 container remove 665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:22 compute-0 systemd[1]: libpod-conmon-665b82a067e96d1e167a045bb4b1856e8bcf941255017e7dd17ae7e307ba5d24.scope: Deactivated successfully.
Nov 29 07:37:22 compute-0 python3.9[125664]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:37:22 compute-0 systemd[1]: Reloading.
Nov 29 07:37:22 compute-0 systemd-rc-local-generator[125724]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:37:22 compute-0 systemd-sysv-generator[125729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:37:22 compute-0 podman[125694]: 2025-11-29 07:37:22.497131966 +0000 UTC m=+0.055618011 container create 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:22 compute-0 podman[125694]: 2025-11-29 07:37:22.479193938 +0000 UTC m=+0.037680023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:22 compute-0 systemd[1]: Started libpod-conmon-5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2.scope.
Nov 29 07:37:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:22 compute-0 podman[125694]: 2025-11-29 07:37:22.755949625 +0000 UTC m=+0.314435690 container init 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:37:22 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:37:22 compute-0 podman[125694]: 2025-11-29 07:37:22.766201196 +0000 UTC m=+0.324687241 container start 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:37:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:37:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:37:22 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:37:22 compute-0 podman[125694]: 2025-11-29 07:37:22.797727641 +0000 UTC m=+0.356213696 container attach 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:37:22 compute-0 sudo[125662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:23 compute-0 infallible_stonebraker[125744]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:37:23 compute-0 infallible_stonebraker[125744]: --> relative data size: 1.0
Nov 29 07:37:23 compute-0 infallible_stonebraker[125744]: --> All data devices are unavailable
Nov 29 07:37:23 compute-0 systemd[1]: libpod-5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2.scope: Deactivated successfully.
Nov 29 07:37:23 compute-0 podman[125694]: 2025-11-29 07:37:23.888699176 +0000 UTC m=+1.447185241 container died 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:37:23 compute-0 systemd[1]: libpod-5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2.scope: Consumed 1.043s CPU time.
Nov 29 07:37:23 compute-0 ceph-mon[75237]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:24 compute-0 python3.9[125939]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:37:24 compute-0 network[125956]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:37:24 compute-0 network[125957]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:37:24 compute-0 network[125958]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:37:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde22d9cdc53ce99c3eb13ab98164e8ad481a631e389e4617299085d54aaca16-merged.mount: Deactivated successfully.
Nov 29 07:37:25 compute-0 podman[125694]: 2025-11-29 07:37:25.232877346 +0000 UTC m=+2.791363401 container remove 5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:37:25 compute-0 systemd[1]: libpod-conmon-5ce6a624f63ed215001037137854d6619a7e8366675fc3d6050ee1c9f6f219f2.scope: Deactivated successfully.
Nov 29 07:37:25 compute-0 sudo[125437]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:25 compute-0 sudo[125973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:25 compute-0 sudo[125973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:25 compute-0 sudo[125973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:25 compute-0 sudo[126001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:25 compute-0 sudo[126001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:25 compute-0 sudo[126001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:25 compute-0 sudo[126029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:25 compute-0 sudo[126029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:25 compute-0 sudo[126029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:25 compute-0 sudo[126057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:37:25 compute-0 sudo[126057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:25 compute-0 podman[126136]: 2025-11-29 07:37:25.89298627 +0000 UTC m=+0.070208874 container create a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:37:25 compute-0 systemd[1]: Started libpod-conmon-a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010.scope.
Nov 29 07:37:25 compute-0 podman[126136]: 2025-11-29 07:37:25.843681141 +0000 UTC m=+0.020903765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:26 compute-0 podman[126136]: 2025-11-29 07:37:26.262667439 +0000 UTC m=+0.439890063 container init a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:37:26 compute-0 podman[126136]: 2025-11-29 07:37:26.27172845 +0000 UTC m=+0.448951085 container start a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:37:26 compute-0 systemd[1]: libpod-a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010.scope: Deactivated successfully.
Nov 29 07:37:26 compute-0 hardcore_dijkstra[126158]: 167 167
Nov 29 07:37:26 compute-0 conmon[126158]: conmon a49c4949b30ae99d5c5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010.scope/container/memory.events
Nov 29 07:37:26 compute-0 ceph-mon[75237]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:26 compute-0 podman[126136]: 2025-11-29 07:37:26.392548935 +0000 UTC m=+0.569771539 container attach a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:37:26 compute-0 podman[126136]: 2025-11-29 07:37:26.393487189 +0000 UTC m=+0.570709813 container died a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:37:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:28 compute-0 sudo[126389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhsoidlzpurtyerjhffmnfelymjycgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401847.8244703-195-171923159292086/AnsiballZ_stat.py'
Nov 29 07:37:28 compute-0 sudo[126389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36f96769c9f532919e9464a715ae9098b91c6dc73a40a1775713da1d074ad14-merged.mount: Deactivated successfully.
Nov 29 07:37:28 compute-0 python3.9[126391]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:28 compute-0 sudo[126389]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:28 compute-0 sshd-session[126392]: Received disconnect from 20.185.243.158 port 40324:11: Bye Bye [preauth]
Nov 29 07:37:28 compute-0 sshd-session[126392]: Disconnected from authenticating user root 20.185.243.158 port 40324 [preauth]
Nov 29 07:37:28 compute-0 ceph-mon[75237]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:28 compute-0 sudo[126469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzcqkxlducrfswcljlkztimrsqbxwcsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401847.8244703-195-171923159292086/AnsiballZ_file.py'
Nov 29 07:37:28 compute-0 sudo[126469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:28 compute-0 python3.9[126471]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:28 compute-0 sudo[126469]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:29 compute-0 podman[126136]: 2025-11-29 07:37:29.267407576 +0000 UTC m=+3.444630180 container remove a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:37:29 compute-0 systemd[1]: libpod-conmon-a49c4949b30ae99d5c5cc61493d0ee9b4d2f7a7794fcd28bc6bef2124342e010.scope: Deactivated successfully.
Nov 29 07:37:29 compute-0 sudo[126625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eavrhglagmqcnukxckkjdcqivxouhjpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401849.1228096-208-228700091468995/AnsiballZ_file.py'
Nov 29 07:37:29 compute-0 sudo[126625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:29 compute-0 podman[126629]: 2025-11-29 07:37:29.441430248 +0000 UTC m=+0.023351506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:29 compute-0 python3.9[126632]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:29 compute-0 sudo[126625]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:29 compute-0 podman[126629]: 2025-11-29 07:37:29.911748459 +0000 UTC m=+0.493669727 container create ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:30 compute-0 sudo[126794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plzocigjigyihupdfpbmbybekxfrkggg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401849.798191-216-175877300194257/AnsiballZ_stat.py'
Nov 29 07:37:30 compute-0 sudo[126794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:30 compute-0 python3.9[126796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:30 compute-0 sudo[126794]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:30 compute-0 systemd[1]: Started libpod-conmon-ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc.scope.
Nov 29 07:37:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4983eb7f65c266d32c6e8690012f5c7e371b42cc566565eb399e1ed95400d078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4983eb7f65c266d32c6e8690012f5c7e371b42cc566565eb399e1ed95400d078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4983eb7f65c266d32c6e8690012f5c7e371b42cc566565eb399e1ed95400d078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4983eb7f65c266d32c6e8690012f5c7e371b42cc566565eb399e1ed95400d078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:30 compute-0 sudo[126877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvotcurdgarhaykbtgwvaoqdqaxrrwfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401849.798191-216-175877300194257/AnsiballZ_file.py'
Nov 29 07:37:30 compute-0 sudo[126877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:30 compute-0 python3.9[126879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:30 compute-0 sudo[126877]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:31 compute-0 ceph-mon[75237]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:31 compute-0 podman[126629]: 2025-11-29 07:37:31.603729447 +0000 UTC m=+2.185650705 container init ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:37:31 compute-0 podman[126629]: 2025-11-29 07:37:31.615582418 +0000 UTC m=+2.197503656 container start ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:31 compute-0 podman[126629]: 2025-11-29 07:37:31.687156673 +0000 UTC m=+2.269077921 container attach ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:31 compute-0 sshd-session[71497]: Received disconnect from 38.102.83.164 port 38010:11: disconnected by user
Nov 29 07:37:31 compute-0 sshd-session[71497]: Disconnected from user zuul 38.102.83.164 port 38010
Nov 29 07:37:31 compute-0 sshd-session[71494]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:37:31 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 07:37:31 compute-0 systemd-logind[782]: Session 18 logged out. Waiting for processes to exit.
Nov 29 07:37:31 compute-0 systemd[1]: session-18.scope: Consumed 1min 29.851s CPU time.
Nov 29 07:37:31 compute-0 systemd-logind[782]: Removed session 18.
Nov 29 07:37:31 compute-0 sudo[127031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoejbzrmjyxxwwvgyfkfqgbdmmjxvcnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401851.296955-231-33536379473736/AnsiballZ_timezone.py'
Nov 29 07:37:31 compute-0 sudo[127031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:31 compute-0 python3.9[127033]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:37:31 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 07:37:32 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 07:37:32 compute-0 sudo[127031]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:32 compute-0 ceph-mon[75237]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:32 compute-0 cool_ellis[126824]: {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     "0": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "devices": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "/dev/loop3"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             ],
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_name": "ceph_lv0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_size": "21470642176",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "name": "ceph_lv0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "tags": {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.crush_device_class": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.encrypted": "0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_id": "0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.vdo": "0"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             },
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "vg_name": "ceph_vg0"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         }
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     ],
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     "1": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "devices": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "/dev/loop4"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             ],
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_name": "ceph_lv1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_size": "21470642176",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "name": "ceph_lv1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "tags": {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.crush_device_class": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.encrypted": "0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_id": "1",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.vdo": "0"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             },
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "vg_name": "ceph_vg1"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         }
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     ],
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     "2": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "devices": [
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "/dev/loop5"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             ],
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_name": "ceph_lv2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_size": "21470642176",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "name": "ceph_lv2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "tags": {
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.crush_device_class": "",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.encrypted": "0",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osd_id": "2",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:                 "ceph.vdo": "0"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             },
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "type": "block",
Nov 29 07:37:32 compute-0 cool_ellis[126824]:             "vg_name": "ceph_vg2"
Nov 29 07:37:32 compute-0 cool_ellis[126824]:         }
Nov 29 07:37:32 compute-0 cool_ellis[126824]:     ]
Nov 29 07:37:32 compute-0 cool_ellis[126824]: }
Nov 29 07:37:32 compute-0 systemd[1]: libpod-ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc.scope: Deactivated successfully.
Nov 29 07:37:32 compute-0 podman[126629]: 2025-11-29 07:37:32.457483672 +0000 UTC m=+3.039404920 container died ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:32 compute-0 sudo[127202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthllbleblyollvdbnrbtaeahdhzqnbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401852.4070282-240-231889851440782/AnsiballZ_file.py'
Nov 29 07:37:32 compute-0 sudo[127202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:32 compute-0 python3.9[127204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:32 compute-0 sudo[127202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4983eb7f65c266d32c6e8690012f5c7e371b42cc566565eb399e1ed95400d078-merged.mount: Deactivated successfully.
Nov 29 07:37:33 compute-0 sudo[127356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjthobtaejpaswwbjwohcrnmcpjfumiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401853.133452-248-198352650798132/AnsiballZ_stat.py'
Nov 29 07:37:33 compute-0 sudo[127356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:33 compute-0 python3.9[127358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:33 compute-0 sudo[127356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:34 compute-0 sudo[127434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cokfjfwlnbbyhrmkhjgmsdkuuzskzrvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401853.133452-248-198352650798132/AnsiballZ_file.py'
Nov 29 07:37:34 compute-0 sudo[127434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:34 compute-0 ceph-mon[75237]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:34 compute-0 python3.9[127436]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:34 compute-0 sudo[127434]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:35 compute-0 sudo[127586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbupxmpxdgqhpctpkriukwqhzozybqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401855.2500985-260-141064463925271/AnsiballZ_stat.py'
Nov 29 07:37:35 compute-0 sudo[127586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:35 compute-0 python3.9[127588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:36 compute-0 sudo[127586]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 sudo[127664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dghpmqxwxmhiwbwfvfsuxvyecjbnmmte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401855.2500985-260-141064463925271/AnsiballZ_file.py'
Nov 29 07:37:36 compute-0 sudo[127664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:36 compute-0 podman[126629]: 2025-11-29 07:37:36.440078677 +0000 UTC m=+7.021999965 container remove ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:37:36 compute-0 systemd[1]: libpod-conmon-ba9194e79a70b06727fb564154969c29d8dbb0f03b63f661ee0c1a2e4ddff9cc.scope: Deactivated successfully.
Nov 29 07:37:36 compute-0 sudo[126057]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 python3.9[127666]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.swpxsnoa recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:36 compute-0 sudo[127664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 sudo[127667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:36 compute-0 sudo[127667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:36 compute-0 sudo[127667]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 sudo[127705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:36 compute-0 sudo[127705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:36 compute-0 sudo[127705]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:36 compute-0 sudo[127741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:36 compute-0 sudo[127741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:36 compute-0 sudo[127741]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:36 compute-0 sudo[127766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:37:36 compute-0 sudo[127766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:37 compute-0 ceph-mon[75237]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:37 compute-0 podman[127843]: 2025-11-29 07:37:37.091531383 +0000 UTC m=+0.025242257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:37 compute-0 sudo[127970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbiaeeialjphhxwqgbskgssmvonkoeww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401857.0757558-272-116867134551228/AnsiballZ_stat.py'
Nov 29 07:37:37 compute-0 sudo[127970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:37 compute-0 podman[127843]: 2025-11-29 07:37:37.651634847 +0000 UTC m=+0.585345691 container create 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:37:37 compute-0 sshd-session[127791]: Invalid user solv from 80.94.92.182 port 49490
Nov 29 07:37:37 compute-0 python3.9[127972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:37 compute-0 sudo[127970]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:37 compute-0 sudo[128048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmxdizjimhchdavqhgupuvqjdeozvbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401857.0757558-272-116867134551228/AnsiballZ_file.py'
Nov 29 07:37:37 compute-0 sudo[128048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:38 compute-0 python3.9[128050]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:38 compute-0 sudo[128048]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:38 compute-0 sshd-session[127791]: Connection closed by invalid user solv 80.94.92.182 port 49490 [preauth]
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:38 compute-0 sudo[128200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ettadskeevuuaovdagauwjliwjrairmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401858.350355-285-89039230162824/AnsiballZ_command.py'
Nov 29 07:37:38 compute-0 sudo[128200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:37:38
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'images']
Nov 29 07:37:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:37:38 compute-0 python3.9[128202]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:37:38 compute-0 sudo[128200]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:39 compute-0 sudo[128353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kalluqhrhlwrwlpfrvttilumpsvvobij ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401859.1723635-293-225802157359294/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:37:39 compute-0 sudo[128353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:39 compute-0 python3[128355]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:37:39 compute-0 sudo[128353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:40 compute-0 systemd[1]: Started libpod-conmon-36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f.scope.
Nov 29 07:37:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:40 compute-0 sudo[128510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpnkerkpdpwnltfdezcukwwtybebcwgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401860.0536923-301-260476341842307/AnsiballZ_stat.py'
Nov 29 07:37:40 compute-0 sudo[128510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:40 compute-0 python3.9[128512]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:40 compute-0 sudo[128510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:40 compute-0 sudo[128588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryyhtydocmgmspllaniqdkrgyrwgfyvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401860.0536923-301-260476341842307/AnsiballZ_file.py'
Nov 29 07:37:40 compute-0 sudo[128588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:41 compute-0 python3.9[128590]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:41 compute-0 sudo[128588]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:41 compute-0 podman[127843]: 2025-11-29 07:37:41.092941491 +0000 UTC m=+4.026652355 container init 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:37:41 compute-0 podman[127843]: 2025-11-29 07:37:41.103315764 +0000 UTC m=+4.037026608 container start 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:37:41 compute-0 recursing_moser[128452]: 167 167
Nov 29 07:37:41 compute-0 systemd[1]: libpod-36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f.scope: Deactivated successfully.
Nov 29 07:37:41 compute-0 conmon[128452]: conmon 36a24dff7d3c14a04896 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f.scope/container/memory.events
Nov 29 07:37:41 compute-0 ceph-mon[75237]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:41 compute-0 podman[127843]: 2025-11-29 07:37:41.232778712 +0000 UTC m=+4.166489556 container attach 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:37:41 compute-0 podman[127843]: 2025-11-29 07:37:41.234399426 +0000 UTC m=+4.168110270 container died 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:41 compute-0 sudo[128752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auolrqsoxmxidbmpwqtdhqskdamwpjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401861.2560747-313-196099118607947/AnsiballZ_stat.py'
Nov 29 07:37:41 compute-0 sudo[128752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:41 compute-0 python3.9[128754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:41 compute-0 sudo[128752]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0d11c11689dc8f0a36c6ddd65789c0cecdd20f4e71f9fb43933bbfecfa61feb-merged.mount: Deactivated successfully.
Nov 29 07:37:42 compute-0 sudo[128830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gexgmaqyfnwzuixuicytmmzibjrcdqoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401861.2560747-313-196099118607947/AnsiballZ_file.py'
Nov 29 07:37:42 compute-0 sudo[128830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:37:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:43 compute-0 python3.9[128832]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:43 compute-0 sudo[128830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:43 compute-0 sudo[128984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifbsnzdowonqiryfigfprdlmciavalx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401863.337812-325-240278806356416/AnsiballZ_stat.py'
Nov 29 07:37:43 compute-0 sudo[128984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:43 compute-0 python3.9[128986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:44 compute-0 sudo[128984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:44 compute-0 sudo[129062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmazbbtpnffnfqambogwffvaqxgczon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401863.337812-325-240278806356416/AnsiballZ_file.py'
Nov 29 07:37:44 compute-0 sudo[129062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:44 compute-0 python3.9[129064]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:44 compute-0 sudo[129062]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:45 compute-0 sudo[129214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobklhcufzfempemukxgnozxplarnybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401864.7172725-337-195838447857828/AnsiballZ_stat.py'
Nov 29 07:37:45 compute-0 sudo[129214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:45 compute-0 python3.9[129216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:45 compute-0 sudo[129214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:45 compute-0 sudo[129292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrkxaxhpddacznwycskpoaoqrcvzflz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401864.7172725-337-195838447857828/AnsiballZ_file.py'
Nov 29 07:37:45 compute-0 sudo[129292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:45 compute-0 ceph-mon[75237]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:45 compute-0 ceph-mon[75237]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:45 compute-0 python3.9[129294]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:45 compute-0 sudo[129292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:46 compute-0 podman[127843]: 2025-11-29 07:37:46.042797167 +0000 UTC m=+8.976508011 container remove 36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_moser, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:37:46 compute-0 systemd[1]: libpod-conmon-36a24dff7d3c14a04896fb4e5f2e357e3bf628d89e0b89d935b42ddb6af7291f.scope: Deactivated successfully.
Nov 29 07:37:46 compute-0 podman[129366]: 2025-11-29 07:37:46.193836682 +0000 UTC m=+0.026495901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:46 compute-0 sudo[129464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exciiqkhvsyeietyubdbtthafoarwzbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401866.114972-349-147764301425779/AnsiballZ_stat.py'
Nov 29 07:37:46 compute-0 sudo[129464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:46 compute-0 podman[129366]: 2025-11-29 07:37:46.529918727 +0000 UTC m=+0.362577926 container create c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:37:46 compute-0 python3.9[129466]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:46 compute-0 systemd[1]: Started libpod-conmon-c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a.scope.
Nov 29 07:37:46 compute-0 sudo[129464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08629d241332aa6790df073dd6c0a426dcedb3d854cd0f24474801946be3aa0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08629d241332aa6790df073dd6c0a426dcedb3d854cd0f24474801946be3aa0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08629d241332aa6790df073dd6c0a426dcedb3d854cd0f24474801946be3aa0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08629d241332aa6790df073dd6c0a426dcedb3d854cd0f24474801946be3aa0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:46 compute-0 sudo[129548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbtzzxrhfhgwawijsqgzkzdzblpzgasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401866.114972-349-147764301425779/AnsiballZ_file.py'
Nov 29 07:37:46 compute-0 sudo[129548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:47 compute-0 python3.9[129550]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:47 compute-0 sudo[129548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:47 compute-0 sudo[129702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuzsbtyuqurzilolccqfrfkxdhemzapq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401867.452536-362-120484469269570/AnsiballZ_command.py'
Nov 29 07:37:47 compute-0 sudo[129702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:47 compute-0 python3.9[129704]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:37:47 compute-0 sudo[129702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:48 compute-0 sudo[129857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daifedmdnbiwjbiwjkzjkgbfbwwctvan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401868.1752117-370-112533295293717/AnsiballZ_blockinfile.py'
Nov 29 07:37:48 compute-0 sudo[129857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:48 compute-0 podman[129366]: 2025-11-29 07:37:48.840573711 +0000 UTC m=+2.673232990 container init c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 07:37:48 compute-0 podman[129366]: 2025-11-29 07:37:48.849841702 +0000 UTC m=+2.682500901 container start c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:48 compute-0 sshd-session[129575]: Received disconnect from 103.236.140.19 port 34336:11: Bye Bye [preauth]
Nov 29 07:37:48 compute-0 sshd-session[129575]: Disconnected from authenticating user root 103.236.140.19 port 34336 [preauth]
Nov 29 07:37:48 compute-0 python3.9[129859]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:48 compute-0 sudo[129857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:49 compute-0 sudo[130015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhvbyphxmjgcvymdudzslnmlwqocqgzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401869.2191675-379-247799636545712/AnsiballZ_file.py'
Nov 29 07:37:49 compute-0 sudo[130015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:49 compute-0 ceph-mon[75237]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:49 compute-0 ceph-mon[75237]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:49 compute-0 podman[129366]: 2025-11-29 07:37:49.587159472 +0000 UTC m=+3.419818671 container attach c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:37:49 compute-0 frosty_spence[129471]: {
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_id": 2,
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "type": "bluestore"
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     },
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_id": 0,
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "type": "bluestore"
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     },
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_id": 1,
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:37:49 compute-0 frosty_spence[129471]:         "type": "bluestore"
Nov 29 07:37:49 compute-0 frosty_spence[129471]:     }
Nov 29 07:37:49 compute-0 frosty_spence[129471]: }
Nov 29 07:37:49 compute-0 python3.9[130018]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:49 compute-0 systemd[1]: libpod-c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a.scope: Deactivated successfully.
Nov 29 07:37:49 compute-0 podman[129366]: 2025-11-29 07:37:49.801677763 +0000 UTC m=+3.634336962 container died c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:37:49 compute-0 sudo[130015]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:50 compute-0 sudo[130203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdywhvhdivvhxlacssyvawvzexkuavjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401869.9298525-379-141303378882764/AnsiballZ_file.py'
Nov 29 07:37:50 compute-0 sudo[130203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:50 compute-0 python3.9[130205]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:50 compute-0 sudo[130203]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:50 compute-0 ceph-mon[75237]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:50 compute-0 ceph-mon[75237]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-08629d241332aa6790df073dd6c0a426dcedb3d854cd0f24474801946be3aa0e-merged.mount: Deactivated successfully.
Nov 29 07:37:51 compute-0 sudo[130356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edyxtpppzrizraoczbzopwclbczqbtgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401870.7043996-394-105827269180977/AnsiballZ_mount.py'
Nov 29 07:37:51 compute-0 sudo[130356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:51 compute-0 podman[129366]: 2025-11-29 07:37:51.321789249 +0000 UTC m=+5.154448448 container remove c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:37:51 compute-0 sudo[127766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:37:51 compute-0 systemd[1]: libpod-conmon-c393bdd3397d95b941ab74db3c5eacaea96eeb450f653613088d87d859f2fe5a.scope: Deactivated successfully.
Nov 29 07:37:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:51 compute-0 python3.9[130358]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:37:51 compute-0 sudo[130356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:37:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 522727e4-2091-4b9e-ae67-0409a1d85077 does not exist
Nov 29 07:37:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f0064bdb-9097-47bf-afe1-206200a2887c does not exist
Nov 29 07:37:51 compute-0 sudo[130380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:51 compute-0 sudo[130380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:51 compute-0 sudo[130380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:51 compute-0 sudo[130421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:37:51 compute-0 sudo[130421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:51 compute-0 sudo[130421]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:51 compute-0 sudo[130560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkxuuouiwomxrnmyyghnlznfqjuubrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401871.6755896-394-168000417743871/AnsiballZ_mount.py'
Nov 29 07:37:51 compute-0 sudo[130560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:52 compute-0 python3.9[130562]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:37:52 compute-0 sudo[130560]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:52 compute-0 ceph-mon[75237]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:37:52 compute-0 sshd-session[122566]: Connection closed by 192.168.122.30 port 56524
Nov 29 07:37:52 compute-0 sshd-session[122563]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:37:52 compute-0 systemd-logind[782]: Session 40 logged out. Waiting for processes to exit.
Nov 29 07:37:52 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 07:37:52 compute-0 systemd[1]: session-40.scope: Consumed 32.503s CPU time.
Nov 29 07:37:52 compute-0 systemd-logind[782]: Removed session 40.
Nov 29 07:37:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:52 compute-0 sshd-session[130448]: Invalid user odoo from 114.34.106.146 port 36280
Nov 29 07:37:53 compute-0 sshd-session[130448]: Received disconnect from 114.34.106.146 port 36280:11: Bye Bye [preauth]
Nov 29 07:37:53 compute-0 sshd-session[130448]: Disconnected from invalid user odoo 114.34.106.146 port 36280 [preauth]
Nov 29 07:37:54 compute-0 ceph-mon[75237]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:37:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:56 compute-0 ceph-mon[75237]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:57 compute-0 sshd-session[130587]: Accepted publickey for zuul from 192.168.122.30 port 45868 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:37:57 compute-0 systemd-logind[782]: New session 41 of user zuul.
Nov 29 07:37:57 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 29 07:37:57 compute-0 sshd-session[130587]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:37:58 compute-0 sudo[130740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awrtxkjwdytuffvxvalcjahtkatufwmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401877.7341948-16-34185460525591/AnsiballZ_tempfile.py'
Nov 29 07:37:58 compute-0 sudo[130740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:58 compute-0 python3.9[130742]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:37:58 compute-0 sudo[130740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:58 compute-0 ceph-mon[75237]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:59 compute-0 sudo[130892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxcjbhgtmdnlnqvbntcbaqcozausjsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401878.5596888-28-113086668662456/AnsiballZ_stat.py'
Nov 29 07:37:59 compute-0 sudo[130892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:59 compute-0 python3.9[130894]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:37:59 compute-0 sudo[130892]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:59 compute-0 sudo[131046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgsokumbdccamaduplttblhdbtisbeed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401879.4568782-36-241425496450642/AnsiballZ_slurp.py'
Nov 29 07:37:59 compute-0 sudo[131046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:00 compute-0 python3.9[131048]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 07:38:00 compute-0 sudo[131046]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:00 compute-0 sudo[131198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iybiyjuhbqtijnwktxfmxaokqmdvozns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401880.2591717-44-257821008473271/AnsiballZ_stat.py'
Nov 29 07:38:00 compute-0 sudo[131198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:00 compute-0 ceph-mon[75237]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:00 compute-0 python3.9[131200]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.r83m98aa follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:00 compute-0 sudo[131198]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:01 compute-0 sudo[131323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvhfrmmtkiiutnlqshtgismwhwjizfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401880.2591717-44-257821008473271/AnsiballZ_copy.py'
Nov 29 07:38:01 compute-0 sudo[131323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:01 compute-0 python3.9[131325]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.r83m98aa mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401880.2591717-44-257821008473271/.source.r83m98aa _original_basename=.6vs4k1qg follow=False checksum=9d112cacadc342c45006ed82f38c8ef40f457843 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:01 compute-0 sudo[131323]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:02 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:38:02 compute-0 ceph-mon[75237]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:02 compute-0 sudo[131477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmhojztywjbefrmxxokaagcyztylyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401881.7343733-59-141966274888564/AnsiballZ_setup.py'
Nov 29 07:38:02 compute-0 sudo[131477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:02 compute-0 python3.9[131479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:38:02 compute-0 sudo[131477]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:03 compute-0 sudo[131629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdejrtyqygujkvaofwdcobwmonovybke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401883.0462537-68-175317560196699/AnsiballZ_blockinfile.py'
Nov 29 07:38:03 compute-0 sudo[131629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:03 compute-0 python3.9[131631]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRCkuuHZs47SCzcu7tjcimdSkKiN0f+0uYKphtJHmNUbnKzTarwpz0zTzwNB6Rkfa0cNXzjM0eQ2CPe+Snkw0qyJtc2enMqtbj360S5H3yQR2rhUDSVpK9OthgSSa87le6SFKfC02rAVgrgzJCApppzYPI9bW/0S+nrRzwKLahzug8A4ADYyEBlm+Jl3DGbTL5d+Ryvws5Qze65DRbFOeFwoKbeEYnrApC92h6s/WPMTg/PhnvcI8OEOvHjHiJwmglVVJJDpcwmcyHixANCM3zJgtVdS7gG+rMrXG4QRXI/xryq832mzGzPwl3y9wI8DGle8cAycke8o4IXAogH8jRzrJiFLG9v4CwlViuukpGRoURnKy50qhpjPFKMNUc69dZRerAgnEUjGQ2CytcZZPjdGsbwKXadWtQKzVIHo8voavaQrujz9oZY6UtfQWFC9kZrieKVrYwl/OxcAA2ta/ogyuwbE7/9Mq8b+4yY0ng8rzb9l4TDRQA6AxzAROl7H8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+05mnacD3gOVCqPwMC2ZPXt1TacIIrH2bpY65vzLCO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA/zUKxW+GMg5y7+JQdnqaiSzO9TWuAqKOZ40ijEIbMdhLCmDPz2JJeEGgT+Ou1gk72ewR7yoXP5Gzbj0L3RGPI=
                                              create=True mode=0644 path=/tmp/ansible.r83m98aa state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:03 compute-0 sudo[131629]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:04 compute-0 ceph-mon[75237]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:05 compute-0 sudo[131781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgzhgzylxtlsryhlhzptmeeqcxtzvzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401884.710213-76-73378504462059/AnsiballZ_command.py'
Nov 29 07:38:05 compute-0 sudo[131781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:05 compute-0 python3.9[131783]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.r83m98aa' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:38:05 compute-0 sudo[131781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:06 compute-0 ceph-mon[75237]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:06 compute-0 sudo[131935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uswucbifmrwjyusudpnpkihekwhskgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401885.6646574-84-79909770777382/AnsiballZ_file.py'
Nov 29 07:38:06 compute-0 sudo[131935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:07 compute-0 python3.9[131937]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.r83m98aa state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:07 compute-0 sudo[131935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:07 compute-0 sshd-session[130590]: Connection closed by 192.168.122.30 port 45868
Nov 29 07:38:07 compute-0 sshd-session[130587]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:38:07 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 07:38:07 compute-0 systemd[1]: session-41.scope: Consumed 5.533s CPU time.
Nov 29 07:38:07 compute-0 systemd-logind[782]: Session 41 logged out. Waiting for processes to exit.
Nov 29 07:38:07 compute-0 systemd-logind[782]: Removed session 41.
Nov 29 07:38:08 compute-0 ceph-mon[75237]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:09 compute-0 ceph-mon[75237]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:12 compute-0 ceph-mon[75237]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:13 compute-0 sshd-session[131962]: Accepted publickey for zuul from 192.168.122.30 port 33778 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:38:13 compute-0 systemd-logind[782]: New session 42 of user zuul.
Nov 29 07:38:13 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 29 07:38:13 compute-0 sshd-session[131962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:38:14 compute-0 python3.9[132115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:38:14 compute-0 ceph-mon[75237]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:15 compute-0 sudo[132269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxwrfvqirymtmuwmkrgeuqsvcxtneof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401894.7538755-32-113968989681200/AnsiballZ_systemd.py'
Nov 29 07:38:15 compute-0 sudo[132269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:15 compute-0 python3.9[132271]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:38:15 compute-0 sudo[132269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:15 compute-0 ceph-mon[75237]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:16 compute-0 sudo[132423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimkfpqlqtsrsgmggcjkeecwsvabhjow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401896.4049447-40-174069021260247/AnsiballZ_systemd.py'
Nov 29 07:38:16 compute-0 sudo[132423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:17 compute-0 python3.9[132425]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:38:17 compute-0 sudo[132423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:17 compute-0 sudo[132576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtppsqfgjfrakybcpgczoxnbadsgothk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401897.2757719-49-176670061527350/AnsiballZ_command.py'
Nov 29 07:38:17 compute-0 sudo[132576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:17 compute-0 python3.9[132578]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:38:17 compute-0 sudo[132576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:18 compute-0 ceph-mon[75237]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:18 compute-0 sudo[132729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acbdrbrdgunjnhvalwsharxwzpygsult ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401898.421258-57-80883945218474/AnsiballZ_stat.py'
Nov 29 07:38:18 compute-0 sudo[132729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:19 compute-0 python3.9[132731]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:19 compute-0 sudo[132729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:19 compute-0 sudo[132881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmhbvcktoytojekharcbwqcikhcsvwro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401899.2830083-66-155700080183573/AnsiballZ_file.py'
Nov 29 07:38:19 compute-0 sudo[132881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:19 compute-0 python3.9[132883]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:19 compute-0 sudo[132881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:20 compute-0 sshd-session[131965]: Connection closed by 192.168.122.30 port 33778
Nov 29 07:38:20 compute-0 sshd-session[131962]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:38:20 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 07:38:20 compute-0 systemd[1]: session-42.scope: Consumed 4.040s CPU time.
Nov 29 07:38:20 compute-0 systemd-logind[782]: Session 42 logged out. Waiting for processes to exit.
Nov 29 07:38:20 compute-0 systemd-logind[782]: Removed session 42.
Nov 29 07:38:20 compute-0 ceph-mon[75237]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:22 compute-0 ceph-mon[75237]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:24 compute-0 ceph-mon[75237]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:26 compute-0 sshd-session[132908]: Accepted publickey for zuul from 192.168.122.30 port 41868 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:38:26 compute-0 systemd-logind[782]: New session 43 of user zuul.
Nov 29 07:38:26 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 29 07:38:26 compute-0 sshd-session[132908]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:38:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:26 compute-0 ceph-mon[75237]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:27 compute-0 python3.9[133061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:38:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:28 compute-0 ceph-mon[75237]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:29 compute-0 sudo[133215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsdngizlxvpwofkxnvhxhttdwvutjbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401908.999881-34-158239821755428/AnsiballZ_setup.py'
Nov 29 07:38:29 compute-0 sudo[133215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:29 compute-0 python3.9[133217]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:38:29 compute-0 sudo[133215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:30 compute-0 sudo[133299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kklwwsjiwmvldezsegmyxmsziogceezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401908.999881-34-158239821755428/AnsiballZ_dnf.py'
Nov 29 07:38:30 compute-0 sudo[133299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:30 compute-0 python3.9[133301]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:38:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:30 compute-0 ceph-mon[75237]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 sudo[133299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:32 compute-0 ceph-mon[75237]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:32 compute-0 python3.9[133452]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:38:34 compute-0 python3.9[133603]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:38:34 compute-0 ceph-mon[75237]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:35 compute-0 python3.9[133753]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:35 compute-0 python3.9[133903]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.316406) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916316551, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 914, "num_deletes": 252, "total_data_size": 1316487, "memory_usage": 1346504, "flush_reason": "Manual Compaction"}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916337068, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 792563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7827, "largest_seqno": 8740, "table_properties": {"data_size": 788923, "index_size": 1356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9114, "raw_average_key_size": 19, "raw_value_size": 781228, "raw_average_value_size": 1676, "num_data_blocks": 64, "num_entries": 466, "num_filter_entries": 466, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401821, "oldest_key_time": 1764401821, "file_creation_time": 1764401916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 20763 microseconds, and 4451 cpu microseconds.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.337190) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 792563 bytes OK
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.337209) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.340277) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.340312) EVENT_LOG_v1 {"time_micros": 1764401916340294, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.340335) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1312069, prev total WAL file size 1312069, number of live WAL files 2.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.341364) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(773KB)], [20(8353KB)]
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916341463, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9347021, "oldest_snapshot_seqno": -1}
Nov 29 07:38:36 compute-0 sshd-session[132911]: Connection closed by 192.168.122.30 port 41868
Nov 29 07:38:36 compute-0 sshd-session[132908]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:38:36 compute-0 systemd-logind[782]: Session 43 logged out. Waiting for processes to exit.
Nov 29 07:38:36 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 07:38:36 compute-0 systemd[1]: session-43.scope: Consumed 6.482s CPU time.
Nov 29 07:38:36 compute-0 systemd-logind[782]: Removed session 43.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3429 keys, 7198862 bytes, temperature: kUnknown
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916602921, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7198862, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7171806, "index_size": 17410, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 84028, "raw_average_key_size": 24, "raw_value_size": 7105613, "raw_average_value_size": 2072, "num_data_blocks": 763, "num_entries": 3429, "num_filter_entries": 3429, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764401916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.603879) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7198862 bytes
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.605240) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.7 rd, 27.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(20.9) write-amplify(9.1) OK, records in: 3906, records dropped: 477 output_compression: NoCompression
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.605263) EVENT_LOG_v1 {"time_micros": 1764401916605252, "job": 6, "event": "compaction_finished", "compaction_time_micros": 262114, "compaction_time_cpu_micros": 29371, "output_level": 6, "num_output_files": 1, "total_output_size": 7198862, "num_input_records": 3906, "num_output_records": 3429, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916605495, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401916606929, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.341249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.606966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.606970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.606972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.606973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:38:36.606974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:38:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:38:38
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data']
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:38:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:38 compute-0 ceph-mon[75237]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:40 compute-0 ceph-mon[75237]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:42 compute-0 sshd-session[133928]: Accepted publickey for zuul from 192.168.122.30 port 42660 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:38:42 compute-0 systemd-logind[782]: New session 44 of user zuul.
Nov 29 07:38:42 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 29 07:38:42 compute-0 sshd-session[133928]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:38:42 compute-0 ceph-mon[75237]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:38:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:43 compute-0 python3.9[134081]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:38:44 compute-0 ceph-mon[75237]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:45 compute-0 sudo[134235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxadojmcofwaqcbcyvmgwntojweqndu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401924.735266-50-149030158499062/AnsiballZ_file.py'
Nov 29 07:38:45 compute-0 sudo[134235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:45 compute-0 python3.9[134237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:38:45 compute-0 sudo[134235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:46 compute-0 sudo[134387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmenybeulfoqskptwcmwkyzxbcbmhrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401925.6395376-50-49623569989323/AnsiballZ_file.py'
Nov 29 07:38:46 compute-0 sudo[134387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:46 compute-0 ceph-mon[75237]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:46 compute-0 python3.9[134389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:38:46 compute-0 sudo[134387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:46 compute-0 sshd-session[134390]: Invalid user terraria from 20.185.243.158 port 42416
Nov 29 07:38:46 compute-0 sshd-session[134390]: Received disconnect from 20.185.243.158 port 42416:11: Bye Bye [preauth]
Nov 29 07:38:46 compute-0 sshd-session[134390]: Disconnected from invalid user terraria 20.185.243.158 port 42416 [preauth]
Nov 29 07:38:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:47 compute-0 sudo[134541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riampgywgnluiggkqseohfisvcavtfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401926.617524-65-6762581769318/AnsiballZ_stat.py'
Nov 29 07:38:47 compute-0 sudo[134541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:47 compute-0 python3.9[134543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:47 compute-0 sudo[134541]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:47 compute-0 sudo[134664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgijorumdkrhhnzvrgovjpvmxhpbzerw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401926.617524-65-6762581769318/AnsiballZ_copy.py'
Nov 29 07:38:47 compute-0 sudo[134664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:48 compute-0 python3.9[134666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401926.617524-65-6762581769318/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5b6de430a93bc11bceb266c60ed1932802612201 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:48 compute-0 sudo[134664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:48 compute-0 sudo[134816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hujuloyxupwwjljuvxlfhyhjpslmxfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401928.3079357-65-48791107403175/AnsiballZ_stat.py'
Nov 29 07:38:48 compute-0 sudo[134816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:49 compute-0 python3.9[134818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:49 compute-0 sudo[134816]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:49 compute-0 sudo[134939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geefnzdbwddxrdzmvyounkszlkhqvpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401928.3079357-65-48791107403175/AnsiballZ_copy.py'
Nov 29 07:38:49 compute-0 sudo[134939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:51 compute-0 sudo[134942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:51 compute-0 sudo[134942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:51 compute-0 sudo[134942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:51 compute-0 sudo[134967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:51 compute-0 sudo[134967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:51 compute-0 sudo[134967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:51 compute-0 sudo[134992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:51 compute-0 sudo[134992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:52 compute-0 sudo[134992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:52 compute-0 sudo[135017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:38:52 compute-0 sudo[135017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:52 compute-0 sudo[135017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:38:52 compute-0 ceph-mon[75237]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:52 compute-0 python3.9[134941]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401928.3079357-65-48791107403175/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=57389c991d49416ed4d5d06ce556b1492a8c1683 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:52 compute-0 sudo[134939]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:53 compute-0 sudo[135210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcczwxbuviunseuattgvqiwflpwvydab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401932.758439-65-171665673956828/AnsiballZ_stat.py'
Nov 29 07:38:53 compute-0 sudo[135210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:38:53 compute-0 python3.9[135212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:53 compute-0 sudo[135210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:53 compute-0 sudo[135236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:53 compute-0 sudo[135236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:53 compute-0 sudo[135236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:53 compute-0 sudo[135261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:53 compute-0 sudo[135261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:53 compute-0 sudo[135261]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:38:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Cumulative writes: 5588 writes, 24K keys, 5588 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5588 writes, 825 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5588 writes, 24K keys, 5588 commit groups, 1.0 writes per commit group, ingest: 18.91 MB, 0.03 MB/s
                                           Interval WAL: 5588 writes, 825 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:38:53 compute-0 sudo[135286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:53 compute-0 sudo[135286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:53 compute-0 sudo[135286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:53 compute-0 sudo[135311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:38:53 compute-0 sudo[135311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:54 compute-0 sudo[135311]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:38:54 compute-0 sudo[135464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sekfanxiflxybbirkvcepjjiaxrakpbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401932.758439-65-171665673956828/AnsiballZ_copy.py'
Nov 29 07:38:54 compute-0 sudo[135464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:54 compute-0 ceph-mon[75237]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:54 compute-0 ceph-mon[75237]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:54 compute-0 python3.9[135466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401932.758439-65-171665673956828/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=006152b3f34630c69e94549c880b06bc2b81c1ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:54 compute-0 sudo[135464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:54 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b4b0fc87-5b89-4d0b-a06a-a440a4529a43 does not exist
Nov 29 07:38:54 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 856e59a9-174c-4f87-a0a9-8d73c8bb5bca does not exist
Nov 29 07:38:54 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 13b44b31-da71-47f7-a036-a9dc06340bca does not exist
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:38:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:38:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:54 compute-0 sudo[135491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:54 compute-0 sudo[135491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:54 compute-0 sudo[135491]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:55 compute-0 sudo[135516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:55 compute-0 sudo[135516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:55 compute-0 sudo[135516]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:55 compute-0 sudo[135564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:55 compute-0 sudo[135564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:55 compute-0 sudo[135564]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:55 compute-0 sudo[135618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:38:55 compute-0 sudo[135618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:55 compute-0 sudo[135734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogjjwvtvdopcimljgtmxeofsgpmkrxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401935.046985-109-233367104555637/AnsiballZ_file.py'
Nov 29 07:38:55 compute-0 sudo[135734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:55 compute-0 python3.9[135742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:38:55 compute-0 podman[135759]: 2025-11-29 07:38:55.534218834 +0000 UTC m=+0.021388147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:55 compute-0 sudo[135734]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:38:56 compute-0 sudo[135922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmlkgichufnedlaejxppbifaamzrwcwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401935.7929351-109-26909682120841/AnsiballZ_file.py'
Nov 29 07:38:56 compute-0 sudo[135922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:56 compute-0 python3.9[135924]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:38:56 compute-0 sudo[135922]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:56 compute-0 podman[135759]: 2025-11-29 07:38:56.631605054 +0000 UTC m=+1.118774387 container create 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:38:56 compute-0 ceph-mon[75237]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:38:56 compute-0 ceph-mon[75237]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:38:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:56 compute-0 systemd[1]: Started libpod-conmon-7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451.scope.
Nov 29 07:38:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:57 compute-0 sudo[136079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvxjvwpxejnfilhjxkqvxqrbcrfqlty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401936.679217-124-188920562689118/AnsiballZ_stat.py'
Nov 29 07:38:57 compute-0 sudo[136079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:57 compute-0 python3.9[136081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:57 compute-0 sudo[136079]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:57 compute-0 podman[135759]: 2025-11-29 07:38:57.247341782 +0000 UTC m=+1.734511095 container init 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:38:57 compute-0 podman[135759]: 2025-11-29 07:38:57.255769453 +0000 UTC m=+1.742938746 container start 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:38:57 compute-0 vigorous_easley[135997]: 167 167
Nov 29 07:38:57 compute-0 systemd[1]: libpod-7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451.scope: Deactivated successfully.
Nov 29 07:38:57 compute-0 podman[135759]: 2025-11-29 07:38:57.487437747 +0000 UTC m=+1.974607060 container attach 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:38:57 compute-0 podman[135759]: 2025-11-29 07:38:57.488770554 +0000 UTC m=+1.975939847 container died 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:38:57 compute-0 sudo[136217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvfzwsqlncaywosjqbvkiftouqzqnepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401936.679217-124-188920562689118/AnsiballZ_copy.py'
Nov 29 07:38:57 compute-0 sudo[136217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-027ce184fbea3c6b83315d6f2373a684bec0565c7a8a7c443ac5d6ea23e8885b-merged.mount: Deactivated successfully.
Nov 29 07:38:57 compute-0 python3.9[136219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401936.679217-124-188920562689118/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=dd30847499aefcae9b66c3818f30232632fedd4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:57 compute-0 sudo[136217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:58 compute-0 sudo[136369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgldywzaytbngwjyrobqgousvxwkouh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401937.994521-124-229578853084581/AnsiballZ_stat.py'
Nov 29 07:38:58 compute-0 sudo[136369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:58 compute-0 python3.9[136371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:58 compute-0 sudo[136369]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:58 compute-0 sudo[136492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvtomwrgnvufwsmmoytoesxxsprkpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401937.994521-124-229578853084581/AnsiballZ_copy.py'
Nov 29 07:38:58 compute-0 sudo[136492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:59 compute-0 python3.9[136494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401937.994521-124-229578853084581/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b25c97ef9b21aeb307c1bf07cdd342eff131e5fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:59 compute-0 sudo[136492]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:59 compute-0 sudo[136644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pleffleurnpdvzqexqytwkagbjfcovxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401939.3709579-124-261782307708238/AnsiballZ_stat.py'
Nov 29 07:38:59 compute-0 sudo[136644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:59 compute-0 python3.9[136646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:38:59 compute-0 sudo[136644]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:00 compute-0 sudo[136767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddamfqamfyhbitglqumeexmswejyljmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401939.3709579-124-261782307708238/AnsiballZ_copy.py'
Nov 29 07:39:00 compute-0 sudo[136767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:00 compute-0 python3.9[136769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401939.3709579-124-261782307708238/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e03c671d2ddf5f652eab8da9ca6f9f2468ee5444 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:00 compute-0 sudo[136767]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:01 compute-0 sudo[136919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goovbdshcmjduagncqurbqjomztgjwcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401940.9834316-168-53392150450222/AnsiballZ_file.py'
Nov 29 07:39:01 compute-0 sudo[136919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:01 compute-0 python3.9[136921]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:01 compute-0 sudo[136919]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:02 compute-0 sudo[137071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwgkosziqfdcokcvrmjjuzmycljuxqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401941.736104-168-88092623614767/AnsiballZ_file.py'
Nov 29 07:39:02 compute-0 sudo[137071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:02 compute-0 python3.9[137073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:02 compute-0 sudo[137071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:02 compute-0 sudo[137223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erltjszmyipqqrwfmibawiakqllyxmyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401942.4826515-183-160776004842719/AnsiballZ_stat.py'
Nov 29 07:39:02 compute-0 sudo[137223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:03 compute-0 python3.9[137225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:03 compute-0 sudo[137223]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:04 compute-0 sudo[137346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnpjcdlzotagkurbqtdjlvxgnxwnsmtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401942.4826515-183-160776004842719/AnsiballZ_copy.py'
Nov 29 07:39:04 compute-0 sudo[137346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:04 compute-0 python3.9[137348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401942.4826515-183-160776004842719/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=adb0123ff6ed4150ffcaad268abcad1dd1e81841 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:04 compute-0 sudo[137346]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:04 compute-0 sudo[137498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yayjenprmogcxxpxptaextbgleyscfnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401944.49677-183-64319476894393/AnsiballZ_stat.py'
Nov 29 07:39:04 compute-0 sudo[137498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:05 compute-0 python3.9[137500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:05 compute-0 sudo[137498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:05 compute-0 ceph-mds[101581]: mds.beacon.cephfs.compute-0.yemcdg missed beacon ack from the monitors
Nov 29 07:39:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:06 compute-0 sudo[137621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byjbhlpdfeohpazqgutmurcanhnqyvzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401944.49677-183-64319476894393/AnsiballZ_copy.py'
Nov 29 07:39:06 compute-0 sudo[137621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:06 compute-0 python3.9[137623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401944.49677-183-64319476894393/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b25c97ef9b21aeb307c1bf07cdd342eff131e5fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:06 compute-0 sudo[137621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:39:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6559 writes, 27K keys, 6559 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6559 writes, 1036 syncs, 6.33 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6559 writes, 27K keys, 6559 commit groups, 1.0 writes per commit group, ingest: 19.69 MB, 0.03 MB/s
                                           Interval WAL: 6559 writes, 1036 syncs, 6.33 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:39:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:06 compute-0 sudo[137775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybnemdakgoysyjgouyajugstshypwqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401946.5758135-183-144064316213953/AnsiballZ_stat.py'
Nov 29 07:39:06 compute-0 sudo[137775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:06 compute-0 ceph-mon[75237]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:07 compute-0 python3.9[137777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:07 compute-0 sudo[137775]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:07 compute-0 sudo[137900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrjkktvelpyykqorsoudlsymglqulqgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401946.5758135-183-144064316213953/AnsiballZ_copy.py'
Nov 29 07:39:07 compute-0 sudo[137900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:07 compute-0 python3.9[137902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401946.5758135-183-144064316213953/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c953a5ebdc9251e393fff28af8a49f826dc39d19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:07 compute-0 sudo[137900]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:08 compute-0 sshd-session[137703]: Invalid user work from 103.236.140.19 port 42716
Nov 29 07:39:08 compute-0 sshd-session[137703]: Received disconnect from 103.236.140.19 port 42716:11: Bye Bye [preauth]
Nov 29 07:39:08 compute-0 sshd-session[137703]: Disconnected from invalid user work 103.236.140.19 port 42716 [preauth]
Nov 29 07:39:08 compute-0 sshd-session[137855]: Invalid user old from 114.34.106.146 port 54578
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:08 compute-0 sshd-session[137855]: Received disconnect from 114.34.106.146 port 54578:11: Bye Bye [preauth]
Nov 29 07:39:08 compute-0 sshd-session[137855]: Disconnected from invalid user old 114.34.106.146 port 54578 [preauth]
Nov 29 07:39:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:08 compute-0 sudo[138052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irfvtdltzsqpoifolcvovpgdasaaamrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401948.5451288-243-175239749117976/AnsiballZ_file.py'
Nov 29 07:39:08 compute-0 sudo[138052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:09 compute-0 python3.9[138054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:09 compute-0 sudo[138052]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:09 compute-0 podman[135759]: 2025-11-29 07:39:09.743368304 +0000 UTC m=+14.230537638 container remove 7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:39:09 compute-0 sudo[138204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quihzccrnyiwbpyuxqsogudztqcucdru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401949.3616452-251-142366994287205/AnsiballZ_stat.py'
Nov 29 07:39:09 compute-0 sudo[138204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:09 compute-0 systemd[1]: libpod-conmon-7f039828cf61236af04d6963649dd9e9d72879136f0b4a483474566395f98451.scope: Deactivated successfully.
Nov 29 07:39:09 compute-0 python3.9[138206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:09 compute-0 sudo[138204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:10 compute-0 podman[138214]: 2025-11-29 07:39:09.965382704 +0000 UTC m=+0.051958956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:10 compute-0 sudo[138348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrresqhfkaiwffcnadjiukxwkhgfzlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401949.3616452-251-142366994287205/AnsiballZ_copy.py'
Nov 29 07:39:10 compute-0 sudo[138348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:10 compute-0 podman[138214]: 2025-11-29 07:39:10.450299994 +0000 UTC m=+0.536876236 container create 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:10 compute-0 python3.9[138350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401949.3616452-251-142366994287205/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:10 compute-0 sudo[138348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 ceph-mon[75237]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:11 compute-0 systemd[1]: Started libpod-conmon-438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda.scope.
Nov 29 07:39:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:11 compute-0 sudo[138505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysrdkargloifnptstohrhcxlfnwwhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401950.8849273-267-154537735054344/AnsiballZ_file.py'
Nov 29 07:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:11 compute-0 sudo[138505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:11 compute-0 podman[138214]: 2025-11-29 07:39:11.387533642 +0000 UTC m=+1.474109914 container init 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:39:11 compute-0 podman[138214]: 2025-11-29 07:39:11.396044775 +0000 UTC m=+1.482621027 container start 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:39:11 compute-0 podman[138214]: 2025-11-29 07:39:11.417708669 +0000 UTC m=+1.504284931 container attach 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 29 07:39:11 compute-0 python3.9[138507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:11 compute-0 sudo[138505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:11 compute-0 ceph-mon[75237]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:12 compute-0 sudo[138661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uruhtbdqaanahkgppuhkzqngqcxdqpfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401951.7320154-275-2328539653740/AnsiballZ_stat.py'
Nov 29 07:39:12 compute-0 sudo[138661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:12 compute-0 python3.9[138663]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:12 compute-0 sudo[138661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:39:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5653 writes, 24K keys, 5653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5653 writes, 798 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5653 writes, 24K keys, 5653 commit groups, 1.0 writes per commit group, ingest: 18.78 MB, 0.03 MB/s
                                           Interval WAL: 5653 writes, 798 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:39:12 compute-0 sleepy_keller[138486]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:39:12 compute-0 sleepy_keller[138486]: --> relative data size: 1.0
Nov 29 07:39:12 compute-0 sleepy_keller[138486]: --> All data devices are unavailable
Nov 29 07:39:12 compute-0 systemd[1]: libpod-438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda.scope: Deactivated successfully.
Nov 29 07:39:12 compute-0 systemd[1]: libpod-438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda.scope: Consumed 1.156s CPU time.
Nov 29 07:39:12 compute-0 podman[138214]: 2025-11-29 07:39:12.614236458 +0000 UTC m=+2.700812700 container died 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f649ac1763cecafbc992dba38db3cecc81ffa4625e4d9efcd58778d9ca901b-merged.mount: Deactivated successfully.
Nov 29 07:39:12 compute-0 podman[138214]: 2025-11-29 07:39:12.68359414 +0000 UTC m=+2.770170382 container remove 438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:12 compute-0 systemd[1]: libpod-conmon-438e8314ca42d0a69406fba64f38d14ca1bf2f478de5ba49921dff73a13fadda.scope: Deactivated successfully.
Nov 29 07:39:12 compute-0 sudo[135618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:12 compute-0 sudo[138830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mibyeshunipwxdxgmsscggndnrpvtole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401951.7320154-275-2328539653740/AnsiballZ_copy.py'
Nov 29 07:39:12 compute-0 sudo[138830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:12 compute-0 sshd-session[138510]: Invalid user dspace from 101.47.142.104 port 34318
Nov 29 07:39:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:12 compute-0 sudo[138816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:12 compute-0 sudo[138816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:12 compute-0 sudo[138816]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:12 compute-0 sudo[138850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:12 compute-0 sudo[138850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:12 compute-0 sudo[138850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:12 compute-0 sudo[138875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:12 compute-0 sudo[138875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:12 compute-0 sudo[138875]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:12 compute-0 sudo[138900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:39:12 compute-0 sudo[138900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:12 compute-0 python3.9[138847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401951.7320154-275-2328539653740/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:12 compute-0 sudo[138830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:13 compute-0 sshd-session[138510]: Received disconnect from 101.47.142.104 port 34318:11: Bye Bye [preauth]
Nov 29 07:39:13 compute-0 sshd-session[138510]: Disconnected from invalid user dspace 101.47.142.104 port 34318 [preauth]
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.333800944 +0000 UTC m=+0.045685265 container create 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.314444863 +0000 UTC m=+0.026329204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:13 compute-0 systemd[1]: Started libpod-conmon-2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3.scope.
Nov 29 07:39:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.537553612 +0000 UTC m=+0.249437953 container init 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:39:13 compute-0 sudo[139130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxapxftfjqlpcgkyrqhtvluliyvqpkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401953.2251163-291-126921927110363/AnsiballZ_file.py'
Nov 29 07:39:13 compute-0 sudo[139130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.546495828 +0000 UTC m=+0.258380149 container start 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:39:13 compute-0 sleepy_golick[139108]: 167 167
Nov 29 07:39:13 compute-0 systemd[1]: libpod-2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3.scope: Deactivated successfully.
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.59620312 +0000 UTC m=+0.308087461 container attach 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.597170607 +0000 UTC m=+0.309054938 container died 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e22adaff294324bfabe1f8b90c2cc8cd4cb9602fcd4d987033911f91e6e3f966-merged.mount: Deactivated successfully.
Nov 29 07:39:13 compute-0 python3.9[139133]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:13 compute-0 podman[139032]: 2025-11-29 07:39:13.751563191 +0000 UTC m=+0.463447542 container remove 2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:13 compute-0 systemd[1]: libpod-conmon-2e561a2adecaee9aaae2cefad7a74f1636e7c25240f9a9fe29dc8832202d36b3.scope: Deactivated successfully.
Nov 29 07:39:13 compute-0 sudo[139130]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:13 compute-0 podman[139178]: 2025-11-29 07:39:13.905445012 +0000 UTC m=+0.027656160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:14 compute-0 podman[139178]: 2025-11-29 07:39:14.005889387 +0000 UTC m=+0.128100435 container create c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:14 compute-0 ceph-mon[75237]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:14 compute-0 systemd[1]: Started libpod-conmon-c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e.scope.
Nov 29 07:39:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bca7b39056a373b66a5b4d9882b9f53e0ef8f0727bf7005bfb50c6d4d74589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bca7b39056a373b66a5b4d9882b9f53e0ef8f0727bf7005bfb50c6d4d74589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bca7b39056a373b66a5b4d9882b9f53e0ef8f0727bf7005bfb50c6d4d74589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bca7b39056a373b66a5b4d9882b9f53e0ef8f0727bf7005bfb50c6d4d74589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:14 compute-0 podman[139178]: 2025-11-29 07:39:14.122707631 +0000 UTC m=+0.244918669 container init c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 07:39:14 compute-0 podman[139178]: 2025-11-29 07:39:14.130124524 +0000 UTC m=+0.252335562 container start c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:39:14 compute-0 podman[139178]: 2025-11-29 07:39:14.133352973 +0000 UTC m=+0.255564011 container attach c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:39:14 compute-0 sudo[139325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldeiheslarpkfdvdiezucssggxaytpnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401953.9515548-299-134684316933034/AnsiballZ_stat.py'
Nov 29 07:39:14 compute-0 sudo[139325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:14 compute-0 python3.9[139327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:14 compute-0 sudo[139325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:14 compute-0 compassionate_gates[139259]: {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     "0": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "devices": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "/dev/loop3"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             ],
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_name": "ceph_lv0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_size": "21470642176",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "name": "ceph_lv0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "tags": {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_name": "ceph",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.crush_device_class": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.encrypted": "0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_id": "0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.vdo": "0"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             },
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "vg_name": "ceph_vg0"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         }
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     ],
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     "1": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "devices": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "/dev/loop4"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             ],
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_name": "ceph_lv1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_size": "21470642176",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "name": "ceph_lv1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "tags": {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_name": "ceph",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.crush_device_class": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.encrypted": "0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_id": "1",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.vdo": "0"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             },
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "vg_name": "ceph_vg1"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         }
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     ],
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     "2": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "devices": [
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "/dev/loop5"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             ],
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_name": "ceph_lv2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_size": "21470642176",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "name": "ceph_lv2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "tags": {
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.cluster_name": "ceph",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.crush_device_class": "",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.encrypted": "0",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osd_id": "2",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:                 "ceph.vdo": "0"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             },
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "type": "block",
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:             "vg_name": "ceph_vg2"
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:         }
Nov 29 07:39:14 compute-0 compassionate_gates[139259]:     ]
Nov 29 07:39:14 compute-0 compassionate_gates[139259]: }
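The JSON block above appears to be ceph-volume lvm list --format json output captured from the compassionate_gates container: each top-level key is an OSD id, and the lv_tags carry the cluster fsid and per-OSD fsid. A minimal sketch of pulling the useful fields out of such a dump, assuming the JSON has been saved to a file (the filename and helper name are illustrative, not part of the job):

    import json

    def summarize_lvm_list(raw: str) -> dict:
        # One entry per OSD id: LV path, VG name, and the ceph.osd_fsid tag.
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                out[osd_id] = {
                    "lv_path": lv.get("lv_path"),
                    "vg_name": lv.get("vg_name"),
                    "osd_fsid": lv.get("tags", {}).get("ceph.osd_fsid"),
                }
        return out

    # Illustrative usage; lvm_list.json is an assumed capture of the output above.
    print(summarize_lvm_list(open("lvm_list.json").read()))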
Nov 29 07:39:14 compute-0 systemd[1]: libpod-c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e.scope: Deactivated successfully.
Nov 29 07:39:14 compute-0 podman[139178]: 2025-11-29 07:39:14.945435567 +0000 UTC m=+1.067646605 container died c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 07:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-21bca7b39056a373b66a5b4d9882b9f53e0ef8f0727bf7005bfb50c6d4d74589-merged.mount: Deactivated successfully.
Nov 29 07:39:15 compute-0 sudo[139463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnswnnajuojuiiffseseqwcenwbgmls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401953.9515548-299-134684316933034/AnsiballZ_copy.py'
Nov 29 07:39:15 compute-0 sudo[139463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:15 compute-0 python3.9[139465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401953.9515548-299-134684316933034/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:15 compute-0 sudo[139463]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 07:39:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:16 compute-0 sudo[139615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhxejnoivumyacxtyfforyyrsuvcuis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401955.9378247-315-46312215067188/AnsiballZ_file.py'
Nov 29 07:39:16 compute-0 sudo[139615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:16 compute-0 podman[139178]: 2025-11-29 07:39:16.531723777 +0000 UTC m=+2.653934855 container remove c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:39:16 compute-0 python3.9[139617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:16 compute-0 systemd[1]: libpod-conmon-c48bb1664aea1c7e082887753495cfb55ea37dc476af978cc2510bcb45d7082e.scope: Deactivated successfully.
Nov 29 07:39:16 compute-0 sudo[139615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-0 sudo[138900]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-0 sudo[139618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:16 compute-0 sudo[139618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-0 sudo[139618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-0 sudo[139667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:16 compute-0 sudo[139667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-0 sudo[139667]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:16 compute-0 sudo[139716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:16 compute-0 sudo[139716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:16 compute-0 sudo[139716]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:16 compute-0 ceph-mon[75237]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:16 compute-0 sudo[139770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:39:16 compute-0 sudo[139770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:17 compute-0 sudo[139874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazwqgavcxhykpsbqymmxoxerirxpklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401956.7452953-323-246666206620242/AnsiballZ_stat.py'
Nov 29 07:39:17 compute-0 sudo[139874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:17 compute-0 python3.9[139877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:17 compute-0 sudo[139874]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:17 compute-0 podman[139910]: 2025-11-29 07:39:17.307354581 +0000 UTC m=+0.038372894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:17 compute-0 podman[139910]: 2025-11-29 07:39:17.745470517 +0000 UTC m=+0.476488820 container create cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:18 compute-0 sudo[140044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyppmorjbpdrlnkozqmmswzagfeldner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401956.7452953-323-246666206620242/AnsiballZ_copy.py'
Nov 29 07:39:18 compute-0 sudo[140044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:18 compute-0 python3.9[140046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401956.7452953-323-246666206620242/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:18 compute-0 sudo[140044]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:18 compute-0 sudo[140196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rujbzpzcpuqdezxtvtqxcthlrgwyjlxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401958.5731695-339-2988268685041/AnsiballZ_file.py'
Nov 29 07:39:18 compute-0 sudo[140196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:19 compute-0 systemd[1]: Started libpod-conmon-cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f.scope.
Nov 29 07:39:19 compute-0 ceph-mon[75237]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:19 compute-0 python3.9[140198]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:19 compute-0 sudo[140196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:19 compute-0 podman[139910]: 2025-11-29 07:39:19.261622282 +0000 UTC m=+1.992640575 container init cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:39:19 compute-0 podman[139910]: 2025-11-29 07:39:19.272133871 +0000 UTC m=+2.003152184 container start cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:39:19 compute-0 ecstatic_mestorf[140201]: 167 167
Nov 29 07:39:19 compute-0 systemd[1]: libpod-cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f.scope: Deactivated successfully.
Nov 29 07:39:19 compute-0 podman[139910]: 2025-11-29 07:39:19.709365503 +0000 UTC m=+2.440383796 container attach cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:39:19 compute-0 podman[139910]: 2025-11-29 07:39:19.711532503 +0000 UTC m=+2.442550766 container died cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:39:19 compute-0 sudo[140366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowdcczdalumdtdojgaukezysydxdnaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401959.421328-347-252997510462046/AnsiballZ_stat.py'
Nov 29 07:39:19 compute-0 sudo[140366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:19 compute-0 python3.9[140368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:19 compute-0 sudo[140366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:20 compute-0 sudo[140489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygvlpcncylzwkqucnzekejivxsekmtys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401959.421328-347-252997510462046/AnsiballZ_copy.py'
Nov 29 07:39:20 compute-0 sudo[140489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:20 compute-0 ceph-mon[75237]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:20 compute-0 python3.9[140491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401959.421328-347-252997510462046/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:20 compute-0 sudo[140489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cfda464283ce3615c7de50387e46754635c3b99a46adebb40d175cad0518a19-merged.mount: Deactivated successfully.
Nov 29 07:39:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:21 compute-0 sudo[140642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wugsbptktquyycdpksqenfzikoeebdqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401960.8912601-363-64296086532254/AnsiballZ_file.py'
Nov 29 07:39:21 compute-0 sudo[140642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:21 compute-0 python3.9[140644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:21 compute-0 sudo[140642]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:21 compute-0 sudo[140794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqqzxwcikmxxowenjlhjjvkriplvjkaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401961.6388357-371-174769205214153/AnsiballZ_stat.py'
Nov 29 07:39:21 compute-0 sudo[140794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:22 compute-0 python3.9[140796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:22 compute-0 sudo[140794]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:22 compute-0 sudo[140917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djspfbdmvrxgrlvjgshanwkgyjruzbou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401961.6388357-371-174769205214153/AnsiballZ_copy.py'
Nov 29 07:39:22 compute-0 sudo[140917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:22 compute-0 python3.9[140919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401961.6388357-371-174769205214153/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3646d8a87b7da827f60eae99acd128a9e4b8a41a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
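At this point the same tls-ca-bundle.pem (checksum 3646d8a87b7da827f60eae99acd128a9e4b8a41a) has been copied into four per-service cacert directories: neutron-metadata, bootstrap, repo-setup, and nova. A quick way to confirm the deployed copies all match the checksum Ansible reported, sketched under the assumption that the paths above are still in place on the host:

    import hashlib
    from pathlib import Path

    EXPECTED = "3646d8a87b7da827f60eae99acd128a9e4b8a41a"
    SERVICES = ["neutron-metadata", "bootstrap", "repo-setup", "nova"]

    for svc in SERVICES:
        pem = Path("/var/lib/openstack/cacerts") / svc / "tls-ca-bundle.pem"
        digest = hashlib.sha1(pem.read_bytes()).hexdigest()
        print(svc, "OK" if digest == EXPECTED else f"MISMATCH {digest}")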
Nov 29 07:39:22 compute-0 sudo[140917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:23 compute-0 sshd-session[133931]: Connection closed by 192.168.122.30 port 42660
Nov 29 07:39:23 compute-0 sshd-session[133928]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:39:23 compute-0 systemd-logind[782]: Session 44 logged out. Waiting for processes to exit.
Nov 29 07:39:23 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 07:39:23 compute-0 systemd[1]: session-44.scope: Consumed 26.885s CPU time.
Nov 29 07:39:23 compute-0 systemd-logind[782]: Removed session 44.
Nov 29 07:39:23 compute-0 podman[139910]: 2025-11-29 07:39:23.934687646 +0000 UTC m=+6.665705939 container remove cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:39:23 compute-0 systemd[1]: libpod-conmon-cb8eca0738b39272ce9fd68f8c8a20fb8fdfbae8f26c9588c0a64cef7ed51f4f.scope: Deactivated successfully.
Nov 29 07:39:24 compute-0 podman[140951]: 2025-11-29 07:39:24.137345325 +0000 UTC m=+0.030118757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:25 compute-0 ceph-mon[75237]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:25 compute-0 podman[140951]: 2025-11-29 07:39:25.471944152 +0000 UTC m=+1.364717584 container create 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:25 compute-0 systemd[1]: Started libpod-conmon-0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16.scope.
Nov 29 07:39:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b17733b64fbd54b7befd7135a83aad26d8a606d72f329ba0efda3a5d0aadb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b17733b64fbd54b7befd7135a83aad26d8a606d72f329ba0efda3a5d0aadb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b17733b64fbd54b7befd7135a83aad26d8a606d72f329ba0efda3a5d0aadb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b17733b64fbd54b7befd7135a83aad26d8a606d72f329ba0efda3a5d0aadb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:25 compute-0 podman[140951]: 2025-11-29 07:39:25.963852213 +0000 UTC m=+1.856625655 container init 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:39:25 compute-0 podman[140951]: 2025-11-29 07:39:25.980804048 +0000 UTC m=+1.873577490 container start 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:26 compute-0 podman[140951]: 2025-11-29 07:39:26.083868215 +0000 UTC m=+1.976641677 container attach 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:26 compute-0 ceph-mon[75237]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:26 compute-0 ceph-mon[75237]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:27 compute-0 agitated_allen[140969]: {
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_id": 2,
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "type": "bluestore"
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     },
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_id": 0,
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "type": "bluestore"
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     },
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_id": 1,
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:39:27 compute-0 agitated_allen[140969]:         "type": "bluestore"
Nov 29 07:39:27 compute-0 agitated_allen[140969]:     }
Nov 29 07:39:27 compute-0 agitated_allen[140969]: }
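This second dump comes from the cephadm-driven ceph-volume raw list --format json call logged at 07:39:16; unlike the lvm listing, it is keyed by osd_uuid. Cross-checking the two views is straightforward: every ceph.osd_fsid tag in the lvm output should appear as an osd_uuid key here. A sketch, with both JSON strings assumed to have been saved off from the log:

    import json

    def crosscheck(lvm_json: str, raw_json: str) -> None:
        # Every ceph.osd_fsid tagged on an LV should appear as an
        # osd_uuid in the raw list output; report any that do not.
        lvm = json.loads(lvm_json)
        raw = json.loads(raw_json)
        for osd_id, lvs in lvm.items():
            for lv in lvs:
                fsid = lv.get("tags", {}).get("ceph.osd_fsid")
                status = "found" if fsid in raw else "MISSING"
                print(f"osd.{osd_id} fsid={fsid}: {status} in raw list")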
Nov 29 07:39:27 compute-0 systemd[1]: libpod-0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16.scope: Deactivated successfully.
Nov 29 07:39:27 compute-0 podman[140951]: 2025-11-29 07:39:27.048849033 +0000 UTC m=+2.941622445 container died 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:39:27 compute-0 systemd[1]: libpod-0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16.scope: Consumed 1.075s CPU time.
Nov 29 07:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b17733b64fbd54b7befd7135a83aad26d8a606d72f329ba0efda3a5d0aadb5-merged.mount: Deactivated successfully.
Nov 29 07:39:27 compute-0 podman[140951]: 2025-11-29 07:39:27.116235511 +0000 UTC m=+3.009008923 container remove 0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:27 compute-0 systemd[1]: libpod-conmon-0288d1077879e95c1d9b4bc1be7a4a6b53dba9b3a7546f83a131504e08f3ba16.scope: Deactivated successfully.
Nov 29 07:39:27 compute-0 sudo[139770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:39:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:39:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:39:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:39:27 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d3738010-8ed5-403c-997f-7fca9fd5cf0d does not exist
Nov 29 07:39:27 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a08ecdf6-1830-499e-9742-5d61e069b185 does not exist
Nov 29 07:39:27 compute-0 sudo[141017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:27 compute-0 sudo[141017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:27 compute-0 sudo[141017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:27 compute-0 sudo[141042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:39:27 compute-0 sudo[141042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:27 compute-0 sudo[141042]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:28 compute-0 ceph-mon[75237]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:39:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:39:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:29 compute-0 sshd-session[141069]: Accepted publickey for zuul from 192.168.122.30 port 36084 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:39:29 compute-0 systemd-logind[782]: New session 45 of user zuul.
Nov 29 07:39:29 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 29 07:39:29 compute-0 sshd-session[141069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:39:30 compute-0 sshd-session[141067]: Invalid user cisco from 45.78.219.195 port 46590
Nov 29 07:39:30 compute-0 sudo[141222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxqexjmgcujjsuhcyvjvihasfdfwintd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401969.6758013-22-155973513526982/AnsiballZ_file.py'
Nov 29 07:39:30 compute-0 sudo[141222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:30 compute-0 sshd-session[141067]: Received disconnect from 45.78.219.195 port 46590:11: Bye Bye [preauth]
Nov 29 07:39:30 compute-0 sshd-session[141067]: Disconnected from invalid user cisco 45.78.219.195 port 46590 [preauth]
Nov 29 07:39:30 compute-0 python3.9[141224]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:30 compute-0 sudo[141222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:31 compute-0 sudo[141374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvrimqybzdaawiznhicifsmzhrbdkkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401970.6387525-34-72105155024221/AnsiballZ_stat.py'
Nov 29 07:39:31 compute-0 sudo[141374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:31 compute-0 python3.9[141376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:31 compute-0 sudo[141374]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:31 compute-0 sudo[141497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxluqbzgyiuxtweloknoormtcxlbxnsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401970.6387525-34-72105155024221/AnsiballZ_copy.py'
Nov 29 07:39:31 compute-0 sudo[141497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:32 compute-0 ceph-mon[75237]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:32 compute-0 python3.9[141499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401970.6387525-34-72105155024221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=73fd3d3bf796904cf7cd5d4cb8d16865a2ca06f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:32 compute-0 sudo[141497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:32 compute-0 sudo[141649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnvuzmzajldesuzxzcpbgmnvqbdnohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401972.5063775-34-198990318655087/AnsiballZ_stat.py'
Nov 29 07:39:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:32 compute-0 sudo[141649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:32 compute-0 python3.9[141651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:33 compute-0 sudo[141649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:33 compute-0 ceph-mon[75237]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:33 compute-0 sudo[141772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncccfhueqayijzmkyahvdfondfytrxon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401972.5063775-34-198990318655087/AnsiballZ_copy.py'
Nov 29 07:39:33 compute-0 sudo[141772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:33 compute-0 python3.9[141774]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401972.5063775-34-198990318655087/.source.conf _original_basename=ceph.conf follow=False checksum=5dc744c549982db9c60431bad5a0735ec7db83ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:33 compute-0 sudo[141772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:33 compute-0 sshd-session[141072]: Connection closed by 192.168.122.30 port 36084
Nov 29 07:39:33 compute-0 sshd-session[141069]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:39:33 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 07:39:33 compute-0 systemd[1]: session-45.scope: Consumed 2.881s CPU time.
Nov 29 07:39:33 compute-0 systemd-logind[782]: Session 45 logged out. Waiting for processes to exit.
Nov 29 07:39:33 compute-0 systemd-logind[782]: Removed session 45.
Nov 29 07:39:34 compute-0 ceph-mon[75237]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:36 compute-0 ceph-mon[75237]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:38 compute-0 ceph-mon[75237]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:39:38
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root']
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:39:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
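The recurring pgmap DBG lines all follow one fixed shape, which makes them easy to track over time. A small sketch that extracts the version, PG count, and usage figures from a line like the one above; the regex is written against the exact format seen in this log and nothing more:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v485: 305 pgs: 305 active+clean; "
            "456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP.search(line)
    if m:
        print(m.groupdict())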
Nov 29 07:39:39 compute-0 sshd-session[141799]: Invalid user gerrit from 103.234.151.178 port 21776
Nov 29 07:39:40 compute-0 sshd-session[141799]: Received disconnect from 103.234.151.178 port 21776:11: Bye Bye [preauth]
Nov 29 07:39:40 compute-0 sshd-session[141799]: Disconnected from invalid user gerrit 103.234.151.178 port 21776 [preauth]
Nov 29 07:39:40 compute-0 ceph-mon[75237]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:40 compute-0 sshd-session[141801]: Accepted publickey for zuul from 192.168.122.30 port 59564 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:39:40 compute-0 systemd-logind[782]: New session 46 of user zuul.
Nov 29 07:39:40 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 29 07:39:40 compute-0 sshd-session[141801]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:39:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:41 compute-0 python3.9[141954]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:39:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:42 compute-0 sudo[142108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbcuqagevjshamieaznffkqwlwpbbkrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401982.2014687-34-153821400523594/AnsiballZ_file.py'
Nov 29 07:39:42 compute-0 sudo[142108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:39:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:42 compute-0 python3.9[142110]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:42 compute-0 sudo[142108]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:43 compute-0 sudo[142260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miitaujvcbosmfucebbflzpkadlqoghq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401983.0631819-34-27682180401111/AnsiballZ_file.py'
Nov 29 07:39:43 compute-0 sudo[142260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:43 compute-0 python3.9[142262]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:39:43 compute-0 sudo[142260]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:44 compute-0 ceph-mon[75237]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:44 compute-0 python3.9[142413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:39:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:45 compute-0 sudo[142563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufthhcwoxxkcljbtnqdiejnnconsaxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401984.6341143-57-47412514040495/AnsiballZ_seboolean.py'
Nov 29 07:39:45 compute-0 sudo[142563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:45 compute-0 ceph-mon[75237]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:45 compute-0 python3.9[142565]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 07:39:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:46 compute-0 ceph-mon[75237]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:47 compute-0 sudo[142563]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:48 compute-0 sudo[142719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrtcxfmaktejsiqemueeippkzxrzlji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401987.7332256-67-272553474884437/AnsiballZ_setup.py'
Nov 29 07:39:48 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 07:39:48 compute-0 sudo[142719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:48 compute-0 ceph-mon[75237]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:48 compute-0 python3.9[142721]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:39:48 compute-0 sudo[142719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:49 compute-0 sudo[142803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlepcsoudtyyjxtdacvrcwujvigudswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401987.7332256-67-272553474884437/AnsiballZ_dnf.py'
Nov 29 07:39:49 compute-0 sudo[142803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:49 compute-0 python3.9[142805]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:39:50 compute-0 ceph-mon[75237]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:50 compute-0 sudo[142803]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:51 compute-0 sudo[142956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujxxmrvtrfwvgmpywjxratjxrxwpeim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401991.2835846-79-161655565903624/AnsiballZ_systemd.py'
Nov 29 07:39:51 compute-0 sudo[142956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:52 compute-0 python3.9[142958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:39:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:52 compute-0 sudo[142956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:53 compute-0 sudo[143111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyajuzcgtyyktrqfwvxrezuwjudngvzj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401992.5293536-87-79362378853815/AnsiballZ_edpm_nftables_snippet.py'
Nov 29 07:39:53 compute-0 sudo[143111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:53 compute-0 python3[143113]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
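The snippet above drops the OVN tunnel rules into /var/lib/edpm-config/firewall/ovn.yaml: accept UDP 4789 (VXLAN) and UDP 6081 (Geneve, matched in the untracked state), plus two raw-table NOTRACK rules so Geneve traffic bypasses conntrack. Expressed as standalone nft commands the intent is roughly the following; this is illustrative only, and the "inet filter"/"input" names are assumptions, since the edpm role renders its own table and chain layout from these YAML entries:

    nft add rule inet filter input udp dport 4789 accept                      # rule 118, assumed chain
    nft add rule inet filter input udp dport 6081 ct state untracked accept   # rule 119
    nft add rule ip raw OUTPUT udp dport 6081 notrack                         # rule 120
    nft add rule ip raw PREROUTING udp dport 6081 notrack                     # rule 121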
Nov 29 07:39:54 compute-0 sudo[143111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:54 compute-0 ceph-mon[75237]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:54 compute-0 sudo[143263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlsfhsbudbpnwqjkwurmdolduxlnnrtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401994.3190734-96-188924676479704/AnsiballZ_file.py'
Nov 29 07:39:54 compute-0 sudo[143263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:54 compute-0 python3.9[143265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:54 compute-0 sudo[143263]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:55 compute-0 ceph-mon[75237]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:55 compute-0 sudo[143415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfqtgapsrphlcfujdkcyobuzoatxknc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401995.052229-104-147992374962879/AnsiballZ_stat.py'
Nov 29 07:39:55 compute-0 sudo[143415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
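The autoscaler lines above are internally consistent: for each pool, pg target = (fraction of capacity used) x bias x 300, e.g. 7.1857e-06 x 1.0 x 300 ~ 0.00216 for '.mgr' and 5.0873e-07 x 4.0 x 300 ~ 0.00061 for 'cephfs.cephfs.meta'. The factor of 300 matches three OSDs at Ceph's default mon_target_pg_per_osd of 100; because the resulting targets are tiny, the quantized values simply stay at each pool's current pg_num.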
Nov 29 07:39:55 compute-0 python3.9[143417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:55 compute-0 sudo[143415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:56 compute-0 sudo[143493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adzjklroszfqzkuzeehirgylkbxyzdvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401995.052229-104-147992374962879/AnsiballZ_file.py'
Nov 29 07:39:56 compute-0 sudo[143493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:56 compute-0 ceph-mon[75237]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:56 compute-0 python3.9[143495]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
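Each ansible.legacy.stat call followed by an ansible.legacy.copy or ansible.legacy.file call in this stretch is the remote half of a single copy/template task: the action plugin first stats the destination and compares checksums, then either transfers new content or, as here with force=False, only enforces owner, group and mode on the file that already matches.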
Nov 29 07:39:56 compute-0 sudo[143493]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:56 compute-0 sudo[143645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xblqovwcgoqgzvferhcnoeqkicdcjrgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401996.5840437-116-85548864633123/AnsiballZ_stat.py'
Nov 29 07:39:56 compute-0 sudo[143645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:57 compute-0 python3.9[143647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:57 compute-0 sudo[143645]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:57 compute-0 sudo[143723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjnbqamuhjpnyifsluevxcommeanxsxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401996.5840437-116-85548864633123/AnsiballZ_file.py'
Nov 29 07:39:57 compute-0 sudo[143723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:57 compute-0 python3.9[143725]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jgja4kq6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:57 compute-0 sudo[143723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:58 compute-0 sudo[143875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sieklnmlkcggvhcojgwqygdugtdevuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401997.794554-128-271040496720055/AnsiballZ_stat.py'
Nov 29 07:39:58 compute-0 sudo[143875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:58 compute-0 python3.9[143877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:39:58 compute-0 sudo[143875]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:58 compute-0 sudo[143953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mowiqapykbermaqdbfjqqwgujtyqkexl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401997.794554-128-271040496720055/AnsiballZ_file.py'
Nov 29 07:39:58 compute-0 sudo[143953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:58 compute-0 python3.9[143955]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:39:58 compute-0 sudo[143953]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:59 compute-0 sudo[144105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niokprciumjlrmdcgwbtdwxqluuvfwyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401999.2160957-141-204805941150320/AnsiballZ_command.py'
Nov 29 07:39:59 compute-0 sudo[144105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:39:59 compute-0 python3.9[144107]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
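nft -j list ruleset dumps the live ruleset as JSON, which is what the role inspects before regenerating anything; the same data can be examined by hand, assuming jq is available on the host:

    nft -j list ruleset | jq '.nftables | length'                        # total objects in the ruleset
    nft -j list ruleset | jq '[.nftables[] | select(.chain)] | length'   # just the chains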
Nov 29 07:39:59 compute-0 sudo[144105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:00 compute-0 ceph-mon[75237]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:00 compute-0 sudo[144258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvbqfnwqjnvflgjliluoirvkpauekkoo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402000.1544383-149-113700490057233/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:40:00 compute-0 sudo[144258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:00 compute-0 python3[144260]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:40:00 compute-0 sudo[144258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:01 compute-0 ceph-mon[75237]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:01 compute-0 sshd-session[144291]: Invalid user exx from 20.185.243.158 port 37788
Nov 29 07:40:01 compute-0 sshd-session[144291]: Received disconnect from 20.185.243.158 port 37788:11: Bye Bye [preauth]
Nov 29 07:40:01 compute-0 sshd-session[144291]: Disconnected from invalid user exx 20.185.243.158 port 37788 [preauth]
Nov 29 07:40:02 compute-0 sudo[144412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvrclbxhmduloiyscudpvwvugbuaerex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402001.767388-157-207035973625427/AnsiballZ_stat.py'
Nov 29 07:40:02 compute-0 sudo[144412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:02 compute-0 ceph-mon[75237]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:02 compute-0 python3.9[144414]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:02 compute-0 sudo[144412]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:03 compute-0 sudo[144537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgwoecphzburzaqbsxkdmeodvzxemjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402001.767388-157-207035973625427/AnsiballZ_copy.py'
Nov 29 07:40:03 compute-0 sudo[144537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:03 compute-0 python3.9[144539]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402001.767388-157-207035973625427/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:03 compute-0 sudo[144537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:04 compute-0 sudo[144689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seqowkyhacsafbgqsbrvqtooauzbxcur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402003.9271882-172-20269619285697/AnsiballZ_stat.py'
Nov 29 07:40:04 compute-0 sudo[144689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:04 compute-0 ceph-mon[75237]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:04 compute-0 python3.9[144691]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:04 compute-0 sudo[144689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:04 compute-0 sudo[144814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukepvjgaatqaarkeymugdfwmspdthtdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402003.9271882-172-20269619285697/AnsiballZ_copy.py'
Nov 29 07:40:04 compute-0 sudo[144814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:05 compute-0 python3.9[144816]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402003.9271882-172-20269619285697/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:05 compute-0 sudo[144814]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:05 compute-0 sudo[144966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbpwuzehmpqzfgwmgvwyywoicuggcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402005.3483033-187-58395508375405/AnsiballZ_stat.py'
Nov 29 07:40:05 compute-0 sudo[144966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:05 compute-0 python3.9[144968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:05 compute-0 sudo[144966]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:06 compute-0 ceph-mon[75237]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:06 compute-0 sudo[145091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczxgppnxzqsuedwjmrlpkkbhwsahgdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402005.3483033-187-58395508375405/AnsiballZ_copy.py'
Nov 29 07:40:06 compute-0 sudo[145091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:06 compute-0 python3.9[145093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402005.3483033-187-58395508375405/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:06 compute-0 sudo[145091]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:07 compute-0 sudo[145244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zootvyjswgdybzeoajbziyjgfleaklru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402006.8079333-202-204109693927403/AnsiballZ_stat.py'
Nov 29 07:40:07 compute-0 sudo[145244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:07 compute-0 python3.9[145246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:07 compute-0 sudo[145244]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:07 compute-0 sudo[145369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkedujqewueokvkgmwugihlhkqnnbglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402006.8079333-202-204109693927403/AnsiballZ_copy.py'
Nov 29 07:40:07 compute-0 sudo[145369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:07 compute-0 python3.9[145371]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402006.8079333-202-204109693927403/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:08 compute-0 sudo[145369]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:08 compute-0 sudo[145521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzdgpxmculigmexvyvstcxkjqjfzjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402008.2078664-217-134270902672073/AnsiballZ_stat.py'
Nov 29 07:40:08 compute-0 sudo[145521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:08 compute-0 ceph-mon[75237]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:08 compute-0 python3.9[145523]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:08 compute-0 sudo[145521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:09 compute-0 sudo[145646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjlgalbibrzhbgwslxbjowjtyplkzrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402008.2078664-217-134270902672073/AnsiballZ_copy.py'
Nov 29 07:40:09 compute-0 sudo[145646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:09 compute-0 python3.9[145648]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402008.2078664-217-134270902672073/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:09 compute-0 sudo[145646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:09 compute-0 ceph-mon[75237]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:10 compute-0 sudo[145798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldiwkvvoajsmenmpqenefztfapmczdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402009.7278144-232-171662187053834/AnsiballZ_file.py'
Nov 29 07:40:10 compute-0 sudo[145798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:10 compute-0 python3.9[145800]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:10 compute-0 sudo[145798]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:10 compute-0 sudo[145950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfcqxfnmadwlxbhlkrfxdacdnwphjfqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402010.4987292-240-136107130228947/AnsiballZ_command.py'
Nov 29 07:40:10 compute-0 sudo[145950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:11 compute-0 python3.9[145952]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:40:11 compute-0 sudo[145950]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:11 compute-0 sudo[146105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvzvtmflldnlslfnboowadfkfvdqdlai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402011.3212614-248-132842120906576/AnsiballZ_blockinfile.py'
Nov 29 07:40:11 compute-0 sudo[146105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:12 compute-0 ceph-mon[75237]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:12 compute-0 python3.9[146107]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
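With the marker and block passed above, the managed section of /etc/sysconfig/nftables.conf ends up looking like this (each candidate edit is checked with the validate command, nft -c -f %s, before the file is written):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK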
Nov 29 07:40:12 compute-0 sudo[146105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:12 compute-0 sudo[146257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwgjngjrixdiixanoxkqtjgavpeiaog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402012.25993-257-188474417828619/AnsiballZ_command.py'
Nov 29 07:40:12 compute-0 sudo[146257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:12 compute-0 python3.9[146259]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:40:12 compute-0 sudo[146257]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:13 compute-0 sudo[146410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqzovbfqzzfcthfwipeqpysiuxndvtzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402013.118394-265-99753079239216/AnsiballZ_stat.py'
Nov 29 07:40:13 compute-0 sudo[146410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:13 compute-0 python3.9[146412]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:40:13 compute-0 sudo[146410]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:14 compute-0 ceph-mon[75237]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:15 compute-0 sudo[146564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmdrwhptjnlnjoxjofxnjbpbltkcyvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402013.916328-273-279662881707161/AnsiballZ_command.py'
Nov 29 07:40:15 compute-0 sudo[146564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:15 compute-0 python3.9[146566]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
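The apply phase mirrors the earlier dry run: the chain definitions are loaded first, then the flush, rule and updated-jump files are streamed through a single nft invocation so they land as one transaction; roughly:

    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -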
Nov 29 07:40:15 compute-0 sudo[146564]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:15 compute-0 sudo[146719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsfctzsjevvnzitwpyouhzulfabpyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402015.5939445-281-169798509989223/AnsiballZ_file.py'
Nov 29 07:40:15 compute-0 sudo[146719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:16 compute-0 ceph-mon[75237]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:16 compute-0 python3.9[146721]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
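The edpm-rules.nft.changed file acts as a marker between tasks: it was touched when the generated rules were rewritten, checked with stat above to decide whether the ruleset had to be re-applied, and is removed here now that the flush and reload have gone through.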
Nov 29 07:40:16 compute-0 sudo[146719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:18 compute-0 python3.9[146871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:40:18 compute-0 ceph-mon[75237]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:19 compute-0 sudo[147022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vophtmylvtkfaksigkzssjyiimwlolfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402018.7071116-321-227545475898205/AnsiballZ_command.py'
Nov 29 07:40:19 compute-0 sudo[147022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:19 compute-0 python3.9[147024]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:40:19 compute-0 ovs-vsctl[147025]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
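ovn-controller reads these external_ids from the local Open vSwitch database to find its southbound connection (ovn-remote), encapsulation IP/type and bridge mappings; individual keys can be read back with, for example:

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip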
Nov 29 07:40:19 compute-0 sudo[147022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:19 compute-0 sudo[147175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkqyzrmeugpeoatvvipzmewudnuifzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402019.5049334-330-158735706033104/AnsiballZ_command.py'
Nov 29 07:40:19 compute-0 sudo[147175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:20 compute-0 python3.9[147177]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:40:20 compute-0 sudo[147175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:20 compute-0 sudo[147330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthtwbhfhhkeiqnvencqwltxqbweaerd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402020.2726536-338-190705070725946/AnsiballZ_command.py'
Nov 29 07:40:20 compute-0 sudo[147330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:20 compute-0 ceph-mon[75237]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:20 compute-0 python3.9[147332]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:40:20 compute-0 ovs-vsctl[147333]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
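The Manager record tells ovsdb-server to accept passive local connections on TCP 6640 (the ptcp target); because the previous task only greps ovs-vsctl show for an existing Manager, this create step is skipped on reruns. The configured target can be listed afterwards with:

    ovs-vsctl get-manager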
Nov 29 07:40:20 compute-0 sudo[147330]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:21 compute-0 python3.9[147483]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:40:22 compute-0 sudo[147639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqmedocnugvmpaycjrwgmbszhozbpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402021.921057-355-84030477419552/AnsiballZ_file.py'
Nov 29 07:40:22 compute-0 sudo[147639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:22 compute-0 ceph-mon[75237]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:22 compute-0 python3.9[147641]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:22 compute-0 sudo[147639]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:22 compute-0 sshd-session[147513]: Invalid user hamed from 114.34.106.146 port 36740
Nov 29 07:40:23 compute-0 sshd-session[147502]: Invalid user ali from 103.236.140.19 port 58030
Nov 29 07:40:23 compute-0 sudo[147791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpoclzdgatmvdvoqwpermuirpnbobow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402022.7550735-363-3726851287285/AnsiballZ_stat.py'
Nov 29 07:40:23 compute-0 sudo[147791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:23 compute-0 sshd-session[147513]: Received disconnect from 114.34.106.146 port 36740:11: Bye Bye [preauth]
Nov 29 07:40:23 compute-0 sshd-session[147513]: Disconnected from invalid user hamed 114.34.106.146 port 36740 [preauth]
Nov 29 07:40:23 compute-0 sshd-session[147502]: Received disconnect from 103.236.140.19 port 58030:11: Bye Bye [preauth]
Nov 29 07:40:23 compute-0 sshd-session[147502]: Disconnected from invalid user ali 103.236.140.19 port 58030 [preauth]
Nov 29 07:40:23 compute-0 python3.9[147793]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:23 compute-0 sudo[147791]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:23 compute-0 sudo[147869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjriittqqlfqkoiutwhvnvsbawipofbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402022.7550735-363-3726851287285/AnsiballZ_file.py'
Nov 29 07:40:23 compute-0 sudo[147869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:23 compute-0 python3.9[147871]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:23 compute-0 sudo[147869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:24 compute-0 ceph-mon[75237]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:24 compute-0 sudo[148021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqnzddznptokuzsmpkyaumyigotumgby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402024.0617998-363-188285587449832/AnsiballZ_stat.py'
Nov 29 07:40:24 compute-0 sudo[148021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:24 compute-0 python3.9[148023]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:24 compute-0 sudo[148021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:24 compute-0 sudo[148099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncxgsnilgpxzstkrtkdozfrwgodwbgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402024.0617998-363-188285587449832/AnsiballZ_file.py'
Nov 29 07:40:24 compute-0 sudo[148099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:25 compute-0 python3.9[148101]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:25 compute-0 sudo[148099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:25 compute-0 sudo[148251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofwpmboqdguimjhvvtpatxwzseiufzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402025.296022-386-176689333121202/AnsiballZ_file.py'
Nov 29 07:40:25 compute-0 sudo[148251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:25 compute-0 python3.9[148253]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:25 compute-0 sudo[148251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:26 compute-0 sudo[148403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgrrbngksflahpulcmjhfqsbdclbnrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402025.9910066-394-253804471270942/AnsiballZ_stat.py'
Nov 29 07:40:26 compute-0 sudo[148403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:26 compute-0 ceph-mon[75237]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:26 compute-0 python3.9[148405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:26 compute-0 sudo[148403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:26 compute-0 sudo[148481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbqgjwnmwmeavtivwybyijnhjkwqgrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402025.9910066-394-253804471270942/AnsiballZ_file.py'
Nov 29 07:40:26 compute-0 sudo[148481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:27 compute-0 python3.9[148483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:27 compute-0 sudo[148481]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[148508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:27 compute-0 sudo[148508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[148508]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[148533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:27 compute-0 sudo[148533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[148533]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[148558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:27 compute-0 sudo[148558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[148558]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[148583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:40:27 compute-0 sudo[148583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:28 compute-0 sudo[148583]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9237a858-019f-4e72-8fb3-38339370ef5d does not exist
Nov 29 07:40:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d8dd535e-3e6a-4f00-a177-9ba0e979ee21 does not exist
Nov 29 07:40:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f0efcd46-dc00-4ba7-85a8-7c2eb83308ce does not exist
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:40:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:28 compute-0 sudo[148676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:28 compute-0 sudo[148676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:28 compute-0 sudo[148676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 ceph-mon[75237]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:40:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:40:28 compute-0 sudo[148727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:28 compute-0 sudo[148727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:28 compute-0 sudo[148727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 sudo[148769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:28 compute-0 sudo[148769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:28 compute-0 sudo[148769]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 sudo[148856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uocyhfxsjyteapkhqtejsdrxbrbfeiul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402028.2118697-406-212458042372911/AnsiballZ_stat.py'
Nov 29 07:40:28 compute-0 sudo[148856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:28 compute-0 sudo[148824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:40:28 compute-0 sudo[148824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
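cephadm is driving ceph-volume inside a one-shot container here: the lvm batch call prepares the three pre-created logical volumes as OSDs, with --no-systemd because cephadm creates and manages the daemon units itself afterwards. Once it finishes, the prepared OSDs can be listed from the same host with:

    cephadm ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list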
Nov 29 07:40:28 compute-0 python3.9[148864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:28 compute-0 sudo[148856]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:28 compute-0 podman[148929]: 2025-11-29 07:40:28.919616414 +0000 UTC m=+0.047292632 container create d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:28 compute-0 systemd[1]: Started libpod-conmon-d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4.scope.
Nov 29 07:40:28 compute-0 podman[148929]: 2025-11-29 07:40:28.894515398 +0000 UTC m=+0.022191636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:29 compute-0 podman[148929]: 2025-11-29 07:40:29.021888181 +0000 UTC m=+0.149564399 container init d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:40:29 compute-0 podman[148929]: 2025-11-29 07:40:29.030704579 +0000 UTC m=+0.158380797 container start d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:29 compute-0 gifted_liskov[148969]: 167 167
Nov 29 07:40:29 compute-0 systemd[1]: libpod-d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4.scope: Deactivated successfully.
Nov 29 07:40:29 compute-0 sudo[148998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gopfwbploslfhymximxzbfcnkrhwxxld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402028.2118697-406-212458042372911/AnsiballZ_file.py'
Nov 29 07:40:29 compute-0 sudo[148998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:29 compute-0 python3.9[149003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:29 compute-0 sudo[148998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:29 compute-0 podman[148929]: 2025-11-29 07:40:29.415213645 +0000 UTC m=+0.542889963 container attach d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:40:29 compute-0 podman[148929]: 2025-11-29 07:40:29.415850913 +0000 UTC m=+0.543527191 container died d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b8dad84ec4b852855c487082fce92bc6f2bb6b6e8ed6ae2ffa856a0e714cc2-merged.mount: Deactivated successfully.
Nov 29 07:40:29 compute-0 podman[148929]: 2025-11-29 07:40:29.507050769 +0000 UTC m=+0.634726987 container remove d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:40:29 compute-0 systemd[1]: libpod-conmon-d5f3303a2848304d19ae3349b8ae2673937c0125acf7ee42371454674026a9c4.scope: Deactivated successfully.
Nov 29 07:40:29 compute-0 podman[149099]: 2025-11-29 07:40:29.702562148 +0000 UTC m=+0.061263074 container create de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:40:29 compute-0 systemd[1]: Started libpod-conmon-de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0.scope.
Nov 29 07:40:29 compute-0 podman[149099]: 2025-11-29 07:40:29.674273853 +0000 UTC m=+0.032974859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:29 compute-0 podman[149099]: 2025-11-29 07:40:29.836156237 +0000 UTC m=+0.194857193 container init de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:40:29 compute-0 podman[149099]: 2025-11-29 07:40:29.848923856 +0000 UTC m=+0.207624782 container start de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:29 compute-0 podman[149099]: 2025-11-29 07:40:29.85405656 +0000 UTC m=+0.212757486 container attach de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:40:30 compute-0 sudo[149194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohicnlvjcwsqwnhjnynjglwvgzfbpjia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402029.4844377-418-270443045299970/AnsiballZ_systemd.py'
Nov 29 07:40:30 compute-0 sudo[149194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:30 compute-0 ceph-mon[75237]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:30 compute-0 python3.9[149196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:40:30 compute-0 systemd[1]: Reloading.
Nov 29 07:40:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:30 compute-0 systemd-rc-local-generator[149241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:40:30 compute-0 systemd-sysv-generator[149245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:40:31 compute-0 stoic_hopper[149116]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:40:31 compute-0 stoic_hopper[149116]: --> relative data size: 1.0
Nov 29 07:40:31 compute-0 stoic_hopper[149116]: --> All data devices are unavailable
Nov 29 07:40:31 compute-0 systemd[1]: libpod-de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0.scope: Deactivated successfully.
Nov 29 07:40:31 compute-0 systemd[1]: libpod-de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0.scope: Consumed 1.149s CPU time.
Nov 29 07:40:31 compute-0 conmon[149116]: conmon de35baec3cc64a66f81b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0.scope/container/memory.events
Nov 29 07:40:31 compute-0 podman[149099]: 2025-11-29 07:40:31.059476349 +0000 UTC m=+1.418177315 container died de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0073c8f2489632cabe049060d2c1c76714b65717d9d06410d6492839a782b810-merged.mount: Deactivated successfully.
Nov 29 07:40:31 compute-0 sudo[149194]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 podman[149099]: 2025-11-29 07:40:31.125177977 +0000 UTC m=+1.483878893 container remove de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hopper, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:40:31 compute-0 systemd[1]: libpod-conmon-de35baec3cc64a66f81be4e36e4cb1491561b57ca7a00184b074067cb7429ff0.scope: Deactivated successfully.
Nov 29 07:40:31 compute-0 sudo[148824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 sudo[149279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:31 compute-0 sudo[149279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:31 compute-0 sudo[149279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 sudo[149320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:31 compute-0 sudo[149320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:31 compute-0 sudo[149320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 sudo[149368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:31 compute-0 sudo[149368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:31 compute-0 sudo[149368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 sudo[149422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:40:31 compute-0 sudo[149422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:31 compute-0 sudo[149527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxpwkzmkgzzfvuwnryassnxnmqhbqdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402031.320978-426-112878478993658/AnsiballZ_stat.py'
Nov 29 07:40:31 compute-0 sudo[149527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.79713071 +0000 UTC m=+0.041361274 container create 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:40:31 compute-0 python3.9[149535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:31 compute-0 systemd[1]: Started libpod-conmon-0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151.scope.
Nov 29 07:40:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.777784575 +0000 UTC m=+0.022015169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:31 compute-0 sudo[149527]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.895308271 +0000 UTC m=+0.139538885 container init 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.904252984 +0000 UTC m=+0.148483558 container start 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.908689838 +0000 UTC m=+0.152920412 container attach 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:40:31 compute-0 romantic_jang[149581]: 167 167
Nov 29 07:40:31 compute-0 systemd[1]: libpod-0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151.scope: Deactivated successfully.
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.91052328 +0000 UTC m=+0.154753844 container died 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9436ad8b4ad61e63a4dabeee886d64b4b08b7f0d49fff56c69b19aa5a3eb5ee-merged.mount: Deactivated successfully.
Nov 29 07:40:31 compute-0 podman[149563]: 2025-11-29 07:40:31.970292191 +0000 UTC m=+0.214522765 container remove 0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:40:31 compute-0 systemd[1]: libpod-conmon-0dd36c7592fb128f7e5eb9286ca55f50e7b3a78117d6cd30942c7aca14df7151.scope: Deactivated successfully.
Nov 29 07:40:32 compute-0 sudo[149683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclscydlumibwrdhuuddzgxtedzgbqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402031.320978-426-112878478993658/AnsiballZ_file.py'
Nov 29 07:40:32 compute-0 sudo[149683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:32 compute-0 podman[149665]: 2025-11-29 07:40:32.12062234 +0000 UTC m=+0.040911352 container create 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:40:32 compute-0 systemd[1]: Started libpod-conmon-84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece.scope.
Nov 29 07:40:32 compute-0 podman[149665]: 2025-11-29 07:40:32.102767787 +0000 UTC m=+0.023056819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a3bd88725172304ab0cc071e56121e252a7beac9906639f4c305b7a96b9390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a3bd88725172304ab0cc071e56121e252a7beac9906639f4c305b7a96b9390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a3bd88725172304ab0cc071e56121e252a7beac9906639f4c305b7a96b9390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a3bd88725172304ab0cc071e56121e252a7beac9906639f4c305b7a96b9390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:32 compute-0 podman[149665]: 2025-11-29 07:40:32.222210248 +0000 UTC m=+0.142499270 container init 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:40:32 compute-0 podman[149665]: 2025-11-29 07:40:32.229245566 +0000 UTC m=+0.149534578 container start 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:40:32 compute-0 podman[149665]: 2025-11-29 07:40:32.235385918 +0000 UTC m=+0.155675020 container attach 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:40:32 compute-0 python3.9[149692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:32 compute-0 sudo[149683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:32 compute-0 ceph-mon[75237]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:32 compute-0 sudo[149849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trfqbaujijsuqsmfntfytlhghrwywqgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402032.491933-438-6423260228729/AnsiballZ_stat.py'
Nov 29 07:40:32 compute-0 sudo[149849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:33 compute-0 python3.9[149851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:33 compute-0 silly_cannon[149695]: {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     "0": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "devices": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "/dev/loop3"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             ],
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_name": "ceph_lv0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_size": "21470642176",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "name": "ceph_lv0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "tags": {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.crush_device_class": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.encrypted": "0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_id": "0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.vdo": "0"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             },
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "vg_name": "ceph_vg0"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         }
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     ],
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     "1": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "devices": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "/dev/loop4"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             ],
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_name": "ceph_lv1",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:33 compute-0 sudo[149849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_size": "21470642176",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "name": "ceph_lv1",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "tags": {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.crush_device_class": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.encrypted": "0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_id": "1",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.vdo": "0"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             },
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "vg_name": "ceph_vg1"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         }
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     ],
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     "2": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "devices": [
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "/dev/loop5"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             ],
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_name": "ceph_lv2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_size": "21470642176",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "name": "ceph_lv2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "tags": {
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.crush_device_class": "",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.encrypted": "0",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osd_id": "2",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:                 "ceph.vdo": "0"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             },
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "type": "block",
Nov 29 07:40:33 compute-0 silly_cannon[149695]:             "vg_name": "ceph_vg2"
Nov 29 07:40:33 compute-0 silly_cannon[149695]:         }
Nov 29 07:40:33 compute-0 silly_cannon[149695]:     ]
Nov 29 07:40:33 compute-0 silly_cannon[149695]: }
Nov 29 07:40:33 compute-0 systemd[1]: libpod-84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece.scope: Deactivated successfully.
Nov 29 07:40:33 compute-0 podman[149665]: 2025-11-29 07:40:33.069177264 +0000 UTC m=+0.989466276 container died 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a3bd88725172304ab0cc071e56121e252a7beac9906639f4c305b7a96b9390-merged.mount: Deactivated successfully.
Nov 29 07:40:33 compute-0 podman[149665]: 2025-11-29 07:40:33.1533237 +0000 UTC m=+1.073612722 container remove 84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:33 compute-0 systemd[1]: libpod-conmon-84f8a43d2438a77a767084058bc5037b1d567e29d67ee24a5e5590a7e7dd2ece.scope: Deactivated successfully.
Nov 29 07:40:33 compute-0 sudo[149422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 sudo[149915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:33 compute-0 sudo[149915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:33 compute-0 sudo[149915]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 sudo[149970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldejgzlbytlathpgcyogxprhdywwzlcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402032.491933-438-6423260228729/AnsiballZ_file.py'
Nov 29 07:40:33 compute-0 sudo[149970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:33 compute-0 sudo[149967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:33 compute-0 sudo[149967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:33 compute-0 sudo[149967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 sudo[149995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:33 compute-0 sudo[149995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:33 compute-0 sudo[149995]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 sudo[150020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:40:33 compute-0 sudo[150020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:33 compute-0 python3.9[149992]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:33 compute-0 sudo[149970]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.867872351 +0000 UTC m=+0.051424618 container create 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:33 compute-0 systemd[1]: Started libpod-conmon-06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0.scope.
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.842939199 +0000 UTC m=+0.026491426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.966808484 +0000 UTC m=+0.150360721 container init 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.97520985 +0000 UTC m=+0.158762097 container start 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.980659044 +0000 UTC m=+0.164211291 container attach 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:40:33 compute-0 pedantic_wiles[150200]: 167 167
Nov 29 07:40:33 compute-0 systemd[1]: libpod-06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0.scope: Deactivated successfully.
Nov 29 07:40:33 compute-0 podman[150155]: 2025-11-29 07:40:33.982630309 +0000 UTC m=+0.166182546 container died 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fec6ec96f71aa7b2cd048e71f9d27499f4179f0694b66cc792a900142ba162d-merged.mount: Deactivated successfully.
Nov 29 07:40:34 compute-0 podman[150155]: 2025-11-29 07:40:34.029344213 +0000 UTC m=+0.212896440 container remove 06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:40:34 compute-0 systemd[1]: libpod-conmon-06ea2847828eefa9b2a966b63bbad1711354d83024f49222ce41fc922a4ad9b0.scope: Deactivated successfully.
Nov 29 07:40:34 compute-0 sudo[150269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhfmbomevtwfxnrqiwsatkrxqolbhulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402033.7482727-450-189452409222900/AnsiballZ_systemd.py'
Nov 29 07:40:34 compute-0 sudo[150269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:34 compute-0 podman[150277]: 2025-11-29 07:40:34.182071349 +0000 UTC m=+0.039857972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:34 compute-0 python3.9[150271]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:40:34 compute-0 systemd[1]: Reloading.
Nov 29 07:40:34 compute-0 systemd-rc-local-generator[150319]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:40:34 compute-0 systemd-sysv-generator[150323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:40:34 compute-0 podman[150277]: 2025-11-29 07:40:34.700441552 +0000 UTC m=+0.558228125 container create 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:40:34 compute-0 ceph-mon[75237]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:34 compute-0 systemd[1]: Started libpod-conmon-08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96.scope.
Nov 29 07:40:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378c2388bfe480bcad36a2b192c2f139f5021d48168d962b35ada313f3783818/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378c2388bfe480bcad36a2b192c2f139f5021d48168d962b35ada313f3783818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378c2388bfe480bcad36a2b192c2f139f5021d48168d962b35ada313f3783818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378c2388bfe480bcad36a2b192c2f139f5021d48168d962b35ada313f3783818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:34 compute-0 podman[150277]: 2025-11-29 07:40:34.983314009 +0000 UTC m=+0.841100582 container init 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:40:34 compute-0 podman[150277]: 2025-11-29 07:40:34.99473753 +0000 UTC m=+0.852524113 container start 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 07:40:34 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:40:35 compute-0 podman[150277]: 2025-11-29 07:40:35.005264206 +0000 UTC m=+0.863050769 container attach 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:40:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:40:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:40:35 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:40:35 compute-0 sudo[150269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:35 compute-0 sudo[150490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwbndhqegbwxkelltvmrzidragkftdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402035.3301609-460-41093410124090/AnsiballZ_file.py'
Nov 29 07:40:35 compute-0 sudo[150490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:35 compute-0 python3.9[150493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:35 compute-0 sudo[150490]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:36 compute-0 epic_jepsen[150329]: {
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_id": 2,
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "type": "bluestore"
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     },
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_id": 0,
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "type": "bluestore"
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     },
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_id": 1,
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:         "type": "bluestore"
Nov 29 07:40:36 compute-0 epic_jepsen[150329]:     }
Nov 29 07:40:36 compute-0 epic_jepsen[150329]: }
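The JSON the short-lived epic_jepsen container prints above has the shape of ceph-volume raw list output: a map keyed by OSD UUID, one entry per bluestore OSD on this host. A minimal sketch (assuming the block is saved as osds.json) that reduces it to an osd_id -> device map:

    import json

    # Load the inventory printed by the cephadm-launched container above.
    with open("osds.json") as f:
        osds = json.load(f)

    # Index by osd_id, e.g. 0 -> /dev/mapper/ceph_vg0-ceph_lv0.
    by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id} -> {by_id[osd_id]}")

All three OSDs report the same ceph_fsid (321e9cb7-...), i.e. they belong to the single cluster whose mon and mgr are logging throughout this section.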
Nov 29 07:40:36 compute-0 systemd[1]: libpod-08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96.scope: Deactivated successfully.
Nov 29 07:40:36 compute-0 systemd[1]: libpod-08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96.scope: Consumed 1.105s CPU time.
Nov 29 07:40:36 compute-0 podman[150277]: 2025-11-29 07:40:36.096911145 +0000 UTC m=+1.954697728 container died 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-378c2388bfe480bcad36a2b192c2f139f5021d48168d962b35ada313f3783818-merged.mount: Deactivated successfully.
Nov 29 07:40:36 compute-0 podman[150277]: 2025-11-29 07:40:36.186789503 +0000 UTC m=+2.044576046 container remove 08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jepsen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:40:36 compute-0 systemd[1]: libpod-conmon-08163484ebd57efd9f046add75747e77fd61ca2ae38b8f07cee8f7362db38f96.scope: Deactivated successfully.
Nov 29 07:40:36 compute-0 sudo[150020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:40:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:40:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:36 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9105bd85-c631-45ba-ba08-fb7dd51f59c4 does not exist
Nov 29 07:40:36 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4c4012e0-452a-4493-ba79-105f7cc19c91 does not exist
Nov 29 07:40:36 compute-0 sudo[150609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:36 compute-0 sudo[150609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:36 compute-0 sudo[150609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:36 compute-0 sudo[150657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:40:36 compute-0 sudo[150657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:36 compute-0 sudo[150657]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:36 compute-0 sudo[150732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyunjvogacapmabqinrumgxipucvfody ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402036.132853-468-254726590329775/AnsiballZ_stat.py'
Nov 29 07:40:36 compute-0 sudo[150732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:36 compute-0 ceph-mon[75237]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:36 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:40:36 compute-0 python3.9[150734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:36 compute-0 sudo[150732]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:37 compute-0 sudo[150855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esrqwuwjsexwtgqzllpozcgtkadkjpex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402036.132853-468-254726590329775/AnsiballZ_copy.py'
Nov 29 07:40:37 compute-0 sudo[150855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:37 compute-0 python3.9[150857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402036.132853-468-254726590329775/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:37 compute-0 sudo[150855]: pam_unix(sudo:session): session closed for user root
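For copy tasks like the one above, ansible decides whether a transfer is needed by comparing content checksums; the checksum=... value it logs is a plain SHA-1 digest of the file body. A quick local reproduction, with the path taken from the task's dest:

    import hashlib

    # Recompute the checksum ansible logged for the installed healthcheck script.
    path = "/var/lib/openstack/healthchecks/ovn_controller/healthcheck"
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    print(digest)  # expected: 4098dd010265fabdf5c26b97d169fc4e575ff457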
Nov 29 07:40:38 compute-0 sudo[151007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdocwncxzcxlpllxbfefnmtooftldmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402037.8392305-485-178840592254265/AnsiballZ_file.py'
Nov 29 07:40:38 compute-0 sudo[151007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:38 compute-0 python3.9[151009]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:40:38 compute-0 sudo[151007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:38 compute-0 ceph-mon[75237]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:40:38
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', 'default.rgw.log']
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:40:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:39 compute-0 ceph-mon[75237]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:39 compute-0 sudo[151159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhfheackwvgclpfigdzzeswsyeghxizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402038.659179-493-203156081461387/AnsiballZ_stat.py'
Nov 29 07:40:39 compute-0 sudo[151159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:39 compute-0 python3.9[151161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:40:40 compute-0 sudo[151159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:40 compute-0 sudo[151282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcjqsvtgxlnysgoqrlakzgmbgbkvsve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402038.659179-493-203156081461387/AnsiballZ_copy.py'
Nov 29 07:40:40 compute-0 sudo[151282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:40 compute-0 python3.9[151284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402038.659179-493-203156081461387/.source.json _original_basename=.p6ycvuuo follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:40 compute-0 sudo[151282]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:41 compute-0 sudo[151434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfzjqrnziyzlknfgcwfhlpbphumpypul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402041.4993458-508-147273607536914/AnsiballZ_file.py'
Nov 29 07:40:41 compute-0 sudo[151434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:41 compute-0 ceph-mon[75237]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:42 compute-0 python3.9[151436]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:42 compute-0 sudo[151434]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:42 compute-0 sudo[151586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kivvamjasbylaboiumlwnqubxozvuqnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402042.2891126-516-189316617901144/AnsiballZ_stat.py'
Nov 29 07:40:42 compute-0 sudo[151586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:40:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:42 compute-0 sudo[151586]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:43 compute-0 sudo[151709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjldhmgtyspshpugcfwptwdcirdznsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402042.2891126-516-189316617901144/AnsiballZ_copy.py'
Nov 29 07:40:43 compute-0 sudo[151709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:43 compute-0 sudo[151709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:43 compute-0 ceph-mon[75237]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:44 compute-0 sudo[151861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooqkxiphckdkwaqkaxfpleqvfjgshwvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402043.8537014-533-185980427229191/AnsiballZ_container_config_data.py'
Nov 29 07:40:44 compute-0 sudo[151861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:44 compute-0 python3.9[151863]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 07:40:44 compute-0 sudo[151861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:45 compute-0 sudo[152013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcruthlnlldvlblozzgssemblioqekox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402044.8015256-542-150128367736971/AnsiballZ_container_config_hash.py'
Nov 29 07:40:45 compute-0 sudo[152013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:45 compute-0 python3.9[152015]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:40:45 compute-0 sudo[152013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:46 compute-0 ceph-mon[75237]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:46 compute-0 sudo[152165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzvamtcnvecrxbfnbdavmkqhxwopobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402045.8081677-551-186659731530427/AnsiballZ_podman_container_info.py'
Nov 29 07:40:46 compute-0 sudo[152165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:46 compute-0 python3.9[152167]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:40:46 compute-0 sudo[152165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:47 compute-0 sudo[152344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngqufhzrlzazeyzrbsxnsmmfjbwauhxr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402047.2267034-564-173984291424075/AnsiballZ_edpm_container_manage.py'
Nov 29 07:40:47 compute-0 sudo[152344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:48 compute-0 python3[152346]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:40:48 compute-0 ceph-mon[75237]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:50 compute-0 ceph-mon[75237]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:40:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:52 compute-0 ceph-mon[75237]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:53 compute-0 podman[152359]: 2025-11-29 07:40:53.878669625 +0000 UTC m=+5.706429916 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:40:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:54 compute-0 podman[152481]: 2025-11-29 07:40:54.089469088 +0000 UTC m=+0.065851863 container create 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 07:40:54 compute-0 podman[152481]: 2025-11-29 07:40:54.052975172 +0000 UTC m=+0.029358017 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:40:54 compute-0 python3[152346]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
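The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage flattens the config_data dict (also attached as a label on the container create event) into a podman create command. A rough sketch of that translation, with the flag mapping read off the logged command itself (label handling omitted for brevity; this is an illustration, not the module's actual code):

    # Build the argv for `podman create` from an edpm/kolla-style config dict.
    def podman_create_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]   # 'host' for ovn_controller
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        return args + [cfg["image"]]

Applied to the ovn_controller config_data above, this yields the same flags, in the same order, as the logged command.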
Nov 29 07:40:54 compute-0 ceph-mon[75237]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:54 compute-0 sshd-session[152441]: Invalid user ts1 from 103.234.151.178 port 45594
Nov 29 07:40:54 compute-0 sudo[152344]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:54 compute-0 sshd-session[152441]: Received disconnect from 103.234.151.178 port 45594:11: Bye Bye [preauth]
Nov 29 07:40:54 compute-0 sshd-session[152441]: Disconnected from invalid user ts1 103.234.151.178 port 45594 [preauth]
Nov 29 07:40:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:54 compute-0 sudo[152669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbbvtpsksidjjlpzpzcndtlgyxmifpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402054.5024436-572-198698575614080/AnsiballZ_stat.py'
Nov 29 07:40:54 compute-0 sudo[152669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:55 compute-0 python3.9[152671]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:40:55 compute-0 sudo[152669]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
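The pg_autoscaler rows above are internally consistent: each "pg target" is the pool's share of raw space times its bias times a constant 300, which matches the default mon_target_pg_per_osd (100) multiplied by the three OSDs enumerated earlier. The log only prints the products, so that factorization is an inference; verifying a few rows:

    # Reproduce the autoscaler's "pg target" values (up to float rounding).
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) x 3 OSDs
    rows = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for pool, (ratio, bias) in rows.items():
        print(f"{pool}: pg target {ratio * bias * PG_BUDGET}")

Every target lands far below the pool's current pg_num, and in each row here the "quantized to" value simply stays at the current count.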
Nov 29 07:40:56 compute-0 sudo[152823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmhgpwrwazsdckfkdqrdasmcamigxddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402055.8291783-581-113267714871602/AnsiballZ_file.py'
Nov 29 07:40:56 compute-0 sudo[152823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:56 compute-0 python3.9[152825]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:56 compute-0 ceph-mon[75237]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 sudo[152823]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:56 compute-0 sudo[152899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obihlswsvvdjoetujkxczewauvatwmrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402055.8291783-581-113267714871602/AnsiballZ_stat.py'
Nov 29 07:40:56 compute-0 sudo[152899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 python3.9[152901]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:40:56 compute-0 sudo[152899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:57 compute-0 sudo[153050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vliqkqfimubdmblgspjfjagaxsleibkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402057.001723-581-84875333782182/AnsiballZ_copy.py'
Nov 29 07:40:57 compute-0 sudo[153050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:57 compute-0 python3.9[153052]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402057.001723-581-84875333782182/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:40:57 compute-0 sudo[153050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:58 compute-0 sudo[153126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acbzpnzjocmutqsbrsdbmqrrvwgrrnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402057.001723-581-84875333782182/AnsiballZ_systemd.py'
Nov 29 07:40:58 compute-0 sudo[153126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:58 compute-0 python3.9[153128]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:40:58 compute-0 systemd[1]: Reloading.
Nov 29 07:40:58 compute-0 ceph-mon[75237]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:58 compute-0 systemd-rc-local-generator[153155]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:40:58 compute-0 systemd-sysv-generator[153159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:40:58 compute-0 sudo[153126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:59 compute-0 sudo[153236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzqtdofvqrshckboqshbczuiopshempv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402057.001723-581-84875333782182/AnsiballZ_systemd.py'
Nov 29 07:40:59 compute-0 sudo[153236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:40:59 compute-0 python3.9[153238]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:40:59 compute-0 systemd[1]: Reloading.
Nov 29 07:40:59 compute-0 systemd-rc-local-generator[153267]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:40:59 compute-0 systemd-sysv-generator[153270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:40:59 compute-0 systemd[1]: Starting ovn_controller container...
Nov 29 07:40:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0148d1c673a59baa10280d7c2ad570693a8d05ac2fe08c9c6f1bd44fb66336/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf.
Nov 29 07:40:59 compute-0 podman[153279]: 2025-11-29 07:40:59.895740792 +0000 UTC m=+0.128810550 container init 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Nov 29 07:40:59 compute-0 ovn_controller[153295]: + sudo -E kolla_set_configs
Nov 29 07:40:59 compute-0 podman[153279]: 2025-11-29 07:40:59.929194937 +0000 UTC m=+0.162264705 container start 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:40:59 compute-0 edpm-start-podman-container[153279]: ovn_controller
Nov 29 07:40:59 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 29 07:40:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 07:40:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 07:41:00 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 29 07:41:00 compute-0 edpm-start-podman-container[153278]: Creating additional drop-in dependency for "ovn_controller" (9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf)
Nov 29 07:41:00 compute-0 systemd[153335]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 29 07:41:00 compute-0 podman[153302]: 2025-11-29 07:41:00.024981092 +0000 UTC m=+0.081987015 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 07:41:00 compute-0 systemd[1]: 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf-564202747b752636.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:41:00 compute-0 systemd[1]: 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf-564202747b752636.service: Failed with result 'exit-code'.
Nov 29 07:41:00 compute-0 systemd[1]: Reloading.
Nov 29 07:41:00 compute-0 systemd-rc-local-generator[153381]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:41:00 compute-0 systemd-sysv-generator[153384]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:41:00 compute-0 systemd[153335]: Queued start job for default target Main User Target.
Nov 29 07:41:00 compute-0 systemd[153335]: Created slice User Application Slice.
Nov 29 07:41:00 compute-0 systemd[153335]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 07:41:00 compute-0 systemd[153335]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:41:00 compute-0 systemd[153335]: Reached target Paths.
Nov 29 07:41:00 compute-0 systemd[153335]: Reached target Timers.
Nov 29 07:41:00 compute-0 systemd[153335]: Starting D-Bus User Message Bus Socket...
Nov 29 07:41:00 compute-0 systemd[153335]: Starting Create User's Volatile Files and Directories...
Nov 29 07:41:00 compute-0 systemd[153335]: Finished Create User's Volatile Files and Directories.
Nov 29 07:41:00 compute-0 systemd[153335]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:41:00 compute-0 systemd[153335]: Reached target Sockets.
Nov 29 07:41:00 compute-0 systemd[153335]: Reached target Basic System.
Nov 29 07:41:00 compute-0 systemd[153335]: Reached target Main User Target.
Nov 29 07:41:00 compute-0 systemd[153335]: Startup finished in 146ms.
Nov 29 07:41:00 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 29 07:41:00 compute-0 systemd[1]: Started ovn_controller container.
Nov 29 07:41:00 compute-0 systemd[1]: Started Session c1 of User root.
Nov 29 07:41:00 compute-0 sudo[153236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:00 compute-0 ovn_controller[153295]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:41:00 compute-0 ovn_controller[153295]: INFO:__main__:Validating config file
Nov 29 07:41:00 compute-0 ovn_controller[153295]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:41:00 compute-0 ovn_controller[153295]: INFO:__main__:Writing out command to execute
Nov 29 07:41:00 compute-0 ovn_controller[153295]: ++ cat /run_command
Nov 29 07:41:00 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + ARGS=
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + sudo kolla_copy_cacerts
Nov 29 07:41:00 compute-0 systemd[1]: Started Session c2 of User root.
Nov 29 07:41:00 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + [[ ! -n '' ]]
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + . kolla_extend_start
Nov 29 07:41:00 compute-0 ovn_controller[153295]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + umask 0022
Nov 29 07:41:00 compute-0 ovn_controller[153295]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
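The +-prefixed lines are the kolla_start trace: kolla_set_configs validates /var/lib/kolla/config_files/config.json (the file installed earlier as ovn_controller.json), applies the COPY_ALWAYS strategy, writes the service command to /run_command, and finally execs it. A config.json consistent with this trace would carry the exact command string cat'd above (whitespace normalized here); any config_files stanzas are not visible in the log, so none are sketched:

    import json

    # Minimal kolla config.json matching the /run_command contents above.
    config = {
        "command": (
            "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock "
            "-p /etc/pki/tls/private/ovndb.key "
            "-c /etc/pki/tls/certs/ovndb.crt "
            "-C /etc/pki/tls/certs/ovndbca.crt"
        ),
    }
    print(json.dumps(config, indent=4))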
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 07:41:00 compute-0 ceph-mon[75237]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.4918] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.4926] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.4937] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.4944] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.4947] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:41:00 compute-0 kernel: br-int: entered promiscuous mode
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:41:00 compute-0 systemd-udevd[153431]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:41:00 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:41:00 compute-0 ovn_controller[153295]: 2025-11-29T07:41:00Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
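[The ovn_controller burst above (messages 00007-00024 plus the pinctrl/statctrl threads) is a normal startup handshake: connect to the southbound DB over SSL, open br-int's OpenFlow management socket, probe OVS feature support, and force a flow recompute. A minimal sketch of verifying that state by hand, assuming the OVS/OVN CLIs are installed on compute-0; none of these commands appear in the log:

    $ ovn-appctl -t ovn-controller connection-status   # southbound DB: expect "connected"
    $ ovs-vsctl list-br                                # br-int should be listed
    $ ovs-ofctl show br-int | head -n 5                # the OpenFlow channel the rconn lines use
]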
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.5997] manager: (ovn-08bd3e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 07:41:00 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 07:41:00 compute-0 systemd-udevd[153448]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.6210] device (genev_sys_6081): carrier: link connected
Nov 29 07:41:00 compute-0 NetworkManager[49116]: <info>  [1764402060.6213] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 07:41:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:00 compute-0 sudo[153558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzboqydcvgbxnusyfkgpwhdwstihsya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402060.570369-609-220568079225002/AnsiballZ_command.py'
Nov 29 07:41:00 compute-0 sudo[153558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:01 compute-0 python3.9[153560]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:41:01 compute-0 ovs-vsctl[153561]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 07:41:01 compute-0 sudo[153558]: pam_unix(sudo:session): session closed for user root
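[The ovs-vsctl call above drops the hw-offload key from the other_config column of the Open_vSwitch record, reverting any earlier hardware-offload opt-in. The surrounding get/set/remove pattern, as a sketch (the "true" value is illustrative, not from the log):

    $ ovs-vsctl --if-exists get Open_vSwitch . other_config:hw-offload
    $ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true    # opt in (illustrative)
    $ ovs-vsctl remove Open_vSwitch . other_config hw-offload      # what the playbook ran
]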
Nov 29 07:41:01 compute-0 sudo[153711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsxynnwectpuupqavovvzdmmbiqhbiqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402061.3048158-617-96651877541040/AnsiballZ_command.py'
Nov 29 07:41:01 compute-0 sudo[153711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:01 compute-0 python3.9[153713]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:41:01 compute-0 ovs-vsctl[153715]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 07:41:01 compute-0 sudo[153711]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:02 compute-0 ceph-mon[75237]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:02 compute-0 sudo[153866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxanhbcfzpslhbzzpzviupzokzwvbrid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402062.3827486-631-219803137958941/AnsiballZ_command.py'
Nov 29 07:41:02 compute-0 sudo[153866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:02 compute-0 python3.9[153868]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:41:02 compute-0 ovs-vsctl[153869]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 07:41:02 compute-0 sudo[153866]: pam_unix(sudo:session): session closed for user root
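[The db_ctl_base ERR at 07:41:01 is benign: `get` fails when the key is absent, and the play evidently treats that failure as "nothing to clean up" before running the unconditional remove above. A sketch of the quieter variant; --if-exists makes the lookup return empty output instead of an error:

    $ ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
]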
Nov 29 07:41:03 compute-0 sshd-session[141804]: Connection closed by 192.168.122.30 port 59564
Nov 29 07:41:03 compute-0 sshd-session[141801]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:41:03 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 07:41:03 compute-0 systemd[1]: session-46.scope: Consumed 1min 3.993s CPU time.
Nov 29 07:41:03 compute-0 systemd-logind[782]: Session 46 logged out. Waiting for processes to exit.
Nov 29 07:41:03 compute-0 systemd-logind[782]: Removed session 46.
Nov 29 07:41:03 compute-0 ceph-mon[75237]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:05 compute-0 sshd-session[153894]: Invalid user sol from 80.94.92.182 port 52060
Nov 29 07:41:06 compute-0 sshd-session[153894]: Connection closed by invalid user sol 80.94.92.182 port 52060 [preauth]
Nov 29 07:41:06 compute-0 ceph-mon[75237]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:08 compute-0 ceph-mon[75237]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:09 compute-0 sshd-session[153896]: Accepted publickey for zuul from 192.168.122.30 port 38640 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:41:09 compute-0 systemd-logind[782]: New session 48 of user zuul.
Nov 29 07:41:09 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 29 07:41:09 compute-0 sshd-session[153896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:41:10 compute-0 ceph-mon[75237]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:10 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 29 07:41:10 compute-0 systemd[153335]: Activating special unit Exit the Session...
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped target Main User Target.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped target Basic System.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped target Paths.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped target Sockets.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped target Timers.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:41:10 compute-0 systemd[153335]: Closed D-Bus User Message Bus Socket.
Nov 29 07:41:10 compute-0 systemd[153335]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:41:10 compute-0 systemd[153335]: Removed slice User Application Slice.
Nov 29 07:41:10 compute-0 systemd[153335]: Reached target Shutdown.
Nov 29 07:41:10 compute-0 systemd[153335]: Finished Exit the Session.
Nov 29 07:41:10 compute-0 systemd[153335]: Reached target Exit the Session.
Nov 29 07:41:10 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 07:41:10 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 29 07:41:10 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 07:41:10 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 07:41:10 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 07:41:10 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 07:41:10 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 07:41:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:10 compute-0 python3.9[154051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:41:11 compute-0 sudo[154205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbpvwfdgzclnjhgvyixiktficvmefwgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402071.4184644-34-96927195200572/AnsiballZ_file.py'
Nov 29 07:41:11 compute-0 sudo[154205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:12 compute-0 python3.9[154207]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:12 compute-0 sudo[154205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:12 compute-0 ceph-mon[75237]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:12 compute-0 sudo[154357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjxlrglryufpbdinisrijbzvsnikznyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402072.299662-34-25890997371864/AnsiballZ_file.py'
Nov 29 07:41:12 compute-0 sudo[154357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:12 compute-0 python3.9[154359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:12 compute-0 sudo[154357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:13 compute-0 sudo[154509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmolciakdsrkwzmxdiwhcceorftshxfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402072.9843903-34-204506173196778/AnsiballZ_file.py'
Nov 29 07:41:13 compute-0 sudo[154509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:13 compute-0 python3.9[154511]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:13 compute-0 sudo[154509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:14 compute-0 sudo[154661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsqpohtddkyudotwqmefxnzpafvvcnmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402073.7342453-34-192189090689896/AnsiballZ_file.py'
Nov 29 07:41:14 compute-0 sudo[154661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:14 compute-0 python3.9[154663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:14 compute-0 sudo[154661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:14 compute-0 sudo[154814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnadkylgpanalpxkbqhtxfffnoalhkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402074.615762-34-74207468112713/AnsiballZ_file.py'
Nov 29 07:41:14 compute-0 ceph-mon[75237]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:14 compute-0 sudo[154814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:15 compute-0 python3.9[154816]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:15 compute-0 sudo[154814]: pam_unix(sudo:session): session closed for user root
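[The run of ansible.builtin.file tasks above builds the neutron-ovn-metadata-agent directory tree, owned by zuul and labeled container_file_t so containers may access it. A rough shell equivalent, as a sketch (paths from the log; the module additionally reports changed/ok state, which these raw commands do not):

    $ install -d -o zuul -g zuul /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent
    $ install -d -o zuul -g zuul -m 0755 \
          /var/lib/neutron/kill_scripts \
          /var/lib/neutron/ovn-metadata-proxy \
          /var/lib/neutron/external/pids
    $ chcon -R -t container_file_t /var/lib/neutron \
          /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent
]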
Nov 29 07:41:15 compute-0 ceph-mon[75237]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:15 compute-0 python3.9[154966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:41:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:17 compute-0 sudo[155116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fanwwzihtxeqvgajtmdogpoqsdcuafhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402076.182375-78-35212556780938/AnsiballZ_seboolean.py'
Nov 29 07:41:17 compute-0 sudo[155116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:17 compute-0 python3.9[155118]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 07:41:18 compute-0 sudo[155116]: pam_unix(sudo:session): session closed for user root
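[ansible.posix.seboolean with persistent=True maps onto setsebool -P; the virt_sandbox_use_netlink boolean lets sandboxed container processes open netlink sockets, presumably for the metadata agent deployed next. Sketch:

    $ setsebool -P virt_sandbox_use_netlink on   # -P persists across reboots
    $ getsebool virt_sandbox_use_netlink         # verify: "--> on"
]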
Nov 29 07:41:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:19 compute-0 ceph-mon[75237]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:19 compute-0 python3.9[155269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:20 compute-0 ceph-mon[75237]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:20 compute-0 sshd-session[155330]: Received disconnect from 20.185.243.158 port 52492:11: Bye Bye [preauth]
Nov 29 07:41:20 compute-0 sshd-session[155330]: Disconnected from authenticating user root 20.185.243.158 port 52492 [preauth]
Nov 29 07:41:20 compute-0 python3.9[155392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402078.894838-86-79122159095024/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:21 compute-0 python3.9[155542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:22 compute-0 python3.9[155663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402080.5144846-101-17681800838622/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
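[Both copy tasks above record the SHA-1 of the deployed file, so the results are directly checkable against the log (sha1sum assumed present on the host):

    $ sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper
      # expect 95c62e64c8f82dd9393a560d1b052dc98d38f810
    $ sha1sum /var/lib/neutron/kill_scripts/haproxy-kill
      # expect 2dfb5489f491f61b95691c3bf95fa1fe48ff3700
]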
Nov 29 07:41:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:23 compute-0 sudo[155818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akzsvghzpiblibodmzpvipcvukoomkfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402083.1304233-118-248242476428389/AnsiballZ_setup.py'
Nov 29 07:41:23 compute-0 sudo[155818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:24 compute-0 python3.9[155820]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:41:25 compute-0 sudo[155818]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:25 compute-0 ceph-mon[75237]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.398546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085398724, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1390, "num_deletes": 251, "total_data_size": 2252391, "memory_usage": 2298544, "flush_reason": "Manual Compaction"}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085423258, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 2210755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8741, "largest_seqno": 10130, "table_properties": {"data_size": 2204239, "index_size": 3715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13072, "raw_average_key_size": 19, "raw_value_size": 2191149, "raw_average_value_size": 3203, "num_data_blocks": 174, "num_entries": 684, "num_filter_entries": 684, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401916, "oldest_key_time": 1764401916, "file_creation_time": 1764402085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 24773 microseconds, and 8120 cpu microseconds.
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.423339) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 2210755 bytes OK
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.423361) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.425012) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.425028) EVENT_LOG_v1 {"time_micros": 1764402085425023, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.425046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2246202, prev total WAL file size 2246202, number of live WAL files 2.
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.426310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(2158KB)], [23(7030KB)]
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085426638, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 9409617, "oldest_snapshot_seqno": -1}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3599 keys, 7459895 bytes, temperature: kUnknown
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085505038, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7459895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7432130, "index_size": 17671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9029, "raw_key_size": 88175, "raw_average_key_size": 24, "raw_value_size": 7363385, "raw_average_value_size": 2045, "num_data_blocks": 766, "num_entries": 3599, "num_filter_entries": 3599, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.505379) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7459895 bytes
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.506955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 95.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 4113, records dropped: 514 output_compression: NoCompression
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.506975) EVENT_LOG_v1 {"time_micros": 1764402085506965, "job": 8, "event": "compaction_finished", "compaction_time_micros": 78536, "compaction_time_cpu_micros": 39189, "output_level": 6, "num_output_files": 1, "total_output_size": 7459895, "num_input_records": 4113, "num_output_records": 3599, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085507627, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402085509418, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.425944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.509464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.509469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.509470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.509472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:41:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:41:25.509473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
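[The rocksdb block above is the monitor compacting its own store: a manual-compaction flush (JOB 7, 1390 entries including 251 deletes) followed by an L0-to-L6 compaction (JOB 8) that rewrites roughly 9 MB, drops 514 dead records, and then deletes the old WAL and SST files. The same compaction can be requested explicitly; a sketch, assuming admin access to the cluster:

    $ ceph tell mon.compute-0 compact                      # trigger mon-store compaction
    $ du -sh /var/lib/ceph/mon/ceph-compute-0/store.db     # store size before/after
]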
Nov 29 07:41:25 compute-0 sudo[155902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemwxhclvkoxmeasnpvxwobbtcxrrnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402083.1304233-118-248242476428389/AnsiballZ_dnf.py'
Nov 29 07:41:25 compute-0 sudo[155902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:25 compute-0 python3.9[155904]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:41:26 compute-0 ceph-mon[75237]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:26 compute-0 ceph-mon[75237]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:27 compute-0 sudo[155902]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:28 compute-0 sudo[156055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjsqzmzzywwpmiccptmtijuzeuypmbzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402087.5983121-130-263803493232084/AnsiballZ_systemd.py'
Nov 29 07:41:28 compute-0 sudo[156055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:28 compute-0 python3.9[156057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:41:28 compute-0 ceph-mon[75237]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:28 compute-0 sudo[156055]: pam_unix(sudo:session): session closed for user root
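[The dnf and systemd tasks above are the usual package-then-service pair for host-side Open vSwitch. Approximate shell equivalents, as a sketch (the modules add idempotence checks the raw commands lack):

    $ dnf install -y openvswitch                   # state=present
    $ systemctl enable --now openvswitch.service   # enabled=True, state=started
]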
Nov 29 07:41:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:30 compute-0 python3.9[156210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:30 compute-0 ovn_controller[153295]: 2025-11-29T07:41:30Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Nov 29 07:41:30 compute-0 ovn_controller[153295]: 2025-11-29T07:41:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 07:41:30 compute-0 podman[156305]: 2025-11-29 07:41:30.60129367 +0000 UTC m=+0.117063783 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
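[The podman line above is the periodic health probe of the ovn_controller container coming back health_status=healthy with a failing streak of 0. The same check can be driven and read back by hand; sketch (older podman releases spell the inspect field .State.Healthcheck.Status):

    $ podman healthcheck run ovn_controller    # exit status 0 means healthy
    $ podman inspect --format '{{.State.Health.Status}}' ovn_controller
]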
Nov 29 07:41:30 compute-0 ceph-mon[75237]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:30 compute-0 python3.9[156347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402089.5884967-138-116914094115336/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:31 compute-0 python3.9[156507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:32 compute-0 python3.9[156628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402090.910185-138-267430743592372/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:33 compute-0 ceph-mon[75237]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:33 compute-0 python3.9[156778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:34 compute-0 python3.9[156901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402093.1571903-182-96555843535551/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:34 compute-0 ceph-mon[75237]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:34 compute-0 sshd-session[156815]: Invalid user sopuser from 114.34.106.146 port 40078
Nov 29 07:41:35 compute-0 python3.9[157051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:35 compute-0 sshd-session[156815]: Received disconnect from 114.34.106.146 port 40078:11: Bye Bye [preauth]
Nov 29 07:41:35 compute-0 sshd-session[156815]: Disconnected from invalid user sopuser 114.34.106.146 port 40078 [preauth]
Nov 29 07:41:35 compute-0 python3.9[157172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402094.509181-182-62634198768657/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:36 compute-0 python3.9[157322]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:41:36 compute-0 sudo[157349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:36 compute-0 sudo[157349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:36 compute-0 sudo[157349]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:36 compute-0 sudo[157378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:36 compute-0 sudo[157378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:36 compute-0 sudo[157378]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:36 compute-0 sudo[157430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:36 compute-0 sudo[157430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:36 compute-0 sudo[157430]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:36 compute-0 sudo[157479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:41:36 compute-0 sudo[157479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:36 compute-0 ceph-mon[75237]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:36 compute-0 sudo[157578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqaseevdsurlqxsbouidwiqheunaskwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402096.525516-220-36227639123131/AnsiballZ_file.py'
Nov 29 07:41:36 compute-0 sudo[157578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:37 compute-0 python3.9[157585]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:37 compute-0 sudo[157578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:37 compute-0 podman[157676]: 2025-11-29 07:41:37.271117466 +0000 UTC m=+0.118064528 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:41:37 compute-0 podman[157676]: 2025-11-29 07:41:37.398584269 +0000 UTC m=+0.245531331 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:41:37 compute-0 sudo[157842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kakbhmfsasbnjtpseooxebtfcagavgjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402097.2400482-228-178469827776122/AnsiballZ_stat.py'
Nov 29 07:41:37 compute-0 sudo[157842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:37 compute-0 python3.9[157851]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:37 compute-0 sudo[157842]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:37 compute-0 ceph-mon[75237]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:37 compute-0 sudo[158017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaofvcjrqelbeugvkndvxilqkegxfzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402097.2400482-228-178469827776122/AnsiballZ_file.py'
Nov 29 07:41:37 compute-0 sudo[158017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:38 compute-0 sshd-session[157498]: Invalid user superset from 103.236.140.19 port 60676
Nov 29 07:41:38 compute-0 sudo[157479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:41:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:41:38 compute-0 python3.9[158022]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:38 compute-0 sudo[158017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 sudo[158037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:38 compute-0 sudo[158037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:38 compute-0 sudo[158037]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 sshd-session[157498]: Received disconnect from 103.236.140.19 port 60676:11: Bye Bye [preauth]
Nov 29 07:41:38 compute-0 sshd-session[157498]: Disconnected from invalid user superset 103.236.140.19 port 60676 [preauth]
Nov 29 07:41:38 compute-0 sudo[158086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:38 compute-0 sudo[158086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:38 compute-0 sudo[158086]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 sudo[158134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:38 compute-0 sudo[158134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:38 compute-0 sudo[158134]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 sudo[158188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:41:38 compute-0 sudo[158188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:38 compute-0 sudo[158296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snzedfbolgmhvvfevjphmzysvfetguft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402098.368236-228-270952038917053/AnsiballZ_stat.py'
Nov 29 07:41:38 compute-0 sudo[158296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:41:38
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root']
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
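[The balancer block above is an automatic upmap optimization pass over the eleven listed pools that found nothing worth moving ("prepared 0/10 changes"), as expected for a small, already-clean cluster. Its state is queryable; sketch:

    $ ceph balancer status   # mode (upmap), whether active, last optimize result
    $ ceph balancer eval     # current distribution score; lower is better
]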
Nov 29 07:41:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:38 compute-0 python3.9[158302]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:38 compute-0 sudo[158296]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:38 compute-0 sudo[158188]: pam_unix(sudo:session): session closed for user root
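[The ceph-admin sudo sequences above are cephadm's standard remote probe: /bin/true to confirm passwordless sudo, `which python3` to locate an interpreter, then the hash-suffixed copy of the cephadm binary with `ls` (inventory of the host's containerized daemons) or `gather-facts` (host facts as JSON). Reproduced by hand it would look roughly like this, with the path taken verbatim from the log:

    $ cd /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e
    $ sudo python3 cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d ls
    $ sudo python3 cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d gather-facts
]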
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:39 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 11d13c08-236a-47b5-8bd8-ca552a5f4c71 does not exist
Nov 29 07:41:39 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 46423c2a-0710-4fe8-a300-176a73f80457 does not exist
Nov 29 07:41:39 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5711d8ce-52a4-4125-a74f-4958934b3a6a does not exist
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:41:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:41:39 compute-0 sudo[158349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:39 compute-0 sudo[158349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:39 compute-0 sudo[158349]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:39 compute-0 sudo[158400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:39 compute-0 sudo[158400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:39 compute-0 sudo[158443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpprlobvokciiheztskpzxvvtftfqtet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402098.368236-228-270952038917053/AnsiballZ_file.py'
Nov 29 07:41:39 compute-0 sudo[158443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:39 compute-0 sudo[158400]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:41:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:41:39 compute-0 sudo[158448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:39 compute-0 sudo[158448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:39 compute-0 sudo[158448]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:39 compute-0 sudo[158473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:41:39 compute-0 sudo[158473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
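The long COMMAND above is cephadm's host shim: everything after the `--` is handed to ceph-volume inside a quay.io/ceph/ceph container, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the resulting OSDs to the default_drive_group spec. Stripped of the container plumbing, the inner call reduces to the sketch below (assuming it runs inside a ceph container with the config and bootstrap-osd keyring already mounted, as the bind mounts later in the log show):

    import subprocess

    lvs = ['/dev/ceph_vg0/ceph_lv0',
           '/dev/ceph_vg1/ceph_lv1',
           '/dev/ceph_vg2/ceph_lv2']

    # lvm batch prepares all three LVs as OSDs in one pass:
    #   --no-auto    treat them as explicit data devices (no fast/slow split)
    #   --yes        non-interactive
    #   --no-systemd don't create units; cephadm deploys its own afterwards
    subprocess.run(['ceph-volume', 'lvm', 'batch', '--no-auto', *lvs,
                    '--yes', '--no-systemd'], check=True)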
Nov 29 07:41:39 compute-0 python3.9[158447]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:39 compute-0 sudo[158443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.651781422 +0000 UTC m=+0.052731120 container create 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:41:39 compute-0 systemd[1]: Started libpod-conmon-2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff.scope.
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.625619977 +0000 UTC m=+0.026569725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.742220806 +0000 UTC m=+0.143170524 container init 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.750448342 +0000 UTC m=+0.151398040 container start 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.7545927 +0000 UTC m=+0.155542398 container attach 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:41:39 compute-0 systemd[1]: libpod-2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff.scope: Deactivated successfully.
Nov 29 07:41:39 compute-0 sweet_kepler[158635]: 167 167
Nov 29 07:41:39 compute-0 conmon[158635]: conmon 2a80471e478ae481697d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff.scope/container/memory.events
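conmon watches the container's cgroup-v2 memory.events file to detect OOM kills; the <nwarn> here just means the scope was torn down before conmon could open it, which is expected for a container this short-lived (it only printed "167 167", matching the ceph user/group ids inside the image). For reference, a reader for the same file while a scope still exists:

    from pathlib import Path

    def memory_events(scope: str) -> dict:
        """Parse memory.events for a libpod scope (cgroup v2 layout)."""
        path = (Path('/sys/fs/cgroup/machine.slice') / scope
                / 'container' / 'memory.events')
        return {k: int(v) for k, v in
                (line.split() for line in path.read_text().splitlines())}

    # memory_events('libpod-<container-id>.scope')
    # -> {'low': 0, 'high': 0, 'max': 0, 'oom': 0, 'oom_kill': 0}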
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.75728205 +0000 UTC m=+0.158231748 container died 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2af7b5cdcd9de987406a9fd4bdeb602fa6bbce710f66ca19baa6acbc86e5432a-merged.mount: Deactivated successfully.
Nov 29 07:41:39 compute-0 podman[158585]: 2025-11-29 07:41:39.798302283 +0000 UTC m=+0.199251981 container remove 2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kepler, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 07:41:39 compute-0 systemd[1]: libpod-conmon-2a80471e478ae481697dda8fa49f7e97472df38a3b6b0fdecd2c9573c7a699ff.scope: Deactivated successfully.
Nov 29 07:41:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
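_set_new_cache_sizes is the mon's periodic memory autotune: cache_size is the overall budget and the three allocations are what it carves out of it (kv for the rocksdb block cache; inc and full for the two osdmap caches, going by the field names). Converting the logged figures to MiB shows the split accounts for essentially the whole budget:

    MiB = 1024 * 1024
    figures = {'cache_size': 1020054731,  # ~972.8 MiB total budget
               'inc_alloc':  348127232,   # 332.0 MiB
               'full_alloc': 348127232,   # 332.0 MiB
               'kv_alloc':   322961408}   # 308.0 MiB
    for name, value in figures.items():
        print(f'{name}: {value / MiB:.1f} MiB')
    # inc + full + kv = 972 MiB, i.e. effectively the whole cache_size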
Nov 29 07:41:39 compute-0 sudo[158722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdbshgjaktaopavkguzjyjylsjptyyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402099.5853794-251-233065361403500/AnsiballZ_file.py'
Nov 29 07:41:39 compute-0 sudo[158722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:39 compute-0 podman[158730]: 2025-11-29 07:41:39.982718496 +0000 UTC m=+0.048388386 container create e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:41:40 compute-0 systemd[1]: Started libpod-conmon-e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52.scope.
Nov 29 07:41:40 compute-0 podman[158730]: 2025-11-29 07:41:39.961296776 +0000 UTC m=+0.026966696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
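The xfs "timestamps until 2038" lines fire as podman sets up each bind target in the overlay: the backing filesystem was created without xfs bigtime, so its inode timestamps are 32-bit signed seconds, and the 0x7fffffff the kernel prints is the usual y2038 cutoff:

    from datetime import datetime, timezone

    limit = 0x7fffffff  # signed 32-bit time_t maximum, 2147483647 s
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00

Harmless on a throwaway CI node, but worth noting on long-lived hosts; xfs filesystems made with bigtime=1 extend the range to the year 2486.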
Nov 29 07:41:40 compute-0 python3.9[158724]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:40 compute-0 podman[158730]: 2025-11-29 07:41:40.083254235 +0000 UTC m=+0.148924145 container init e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:41:40 compute-0 podman[158730]: 2025-11-29 07:41:40.092174778 +0000 UTC m=+0.157844668 container start e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:41:40 compute-0 podman[158730]: 2025-11-29 07:41:40.097381554 +0000 UTC m=+0.163051444 container attach e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:41:40 compute-0 sudo[158722]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:40 compute-0 ceph-mon[75237]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:41 compute-0 sudo[158917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwnvcsyyiayfhwkgylrditvadzmyrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402100.7990818-259-89851745735533/AnsiballZ_stat.py'
Nov 29 07:41:41 compute-0 sudo[158917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:41 compute-0 flamboyant_cori[158746]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:41:41 compute-0 flamboyant_cori[158746]: --> relative data size: 1.0
Nov 29 07:41:41 compute-0 flamboyant_cori[158746]: --> All data devices are unavailable
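ceph-volume's batch report explains why this run is a no-op: it was handed 0 physical disks and 3 LVM devices, and rejected all three as unavailable. For LVs that is the normal steady-state answer once they already carry ceph lv_tags from an earlier prepare (visible in the lvm list dump further down), so batch refuses to re-consume them rather than clobbering live OSDs. One way to confirm from the host, assuming standard LVM tooling:

    import json
    import subprocess

    # ceph-volume stamps prepared LVs with ceph.* tags; any LV with
    # ceph.osd_id set will be reported as unavailable by 'lvm batch'.
    out = subprocess.run(
        ['lvs', '--reportformat', 'json', '-o', 'lv_path,lv_tags'],
        check=True, capture_output=True, text=True)
    for lv in json.loads(out.stdout)['report'][0]['lv']:
        if 'ceph.osd_id=' in lv['lv_tags']:
            print(lv['lv_path'], 'already prepared:', lv['lv_tags'])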
Nov 29 07:41:41 compute-0 systemd[1]: libpod-e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52.scope: Deactivated successfully.
Nov 29 07:41:41 compute-0 podman[158730]: 2025-11-29 07:41:41.242661233 +0000 UTC m=+1.308331123 container died e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:41:41 compute-0 systemd[1]: libpod-e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52.scope: Consumed 1.071s CPU time.
Nov 29 07:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f149aca2d4e677a3840e6be77f8503e5d5c5419c4230b141f8715e8724b0a5c0-merged.mount: Deactivated successfully.
Nov 29 07:41:41 compute-0 podman[158730]: 2025-11-29 07:41:41.30255077 +0000 UTC m=+1.368220660 container remove e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cori, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:41:41 compute-0 python3.9[158921]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:41 compute-0 systemd[1]: libpod-conmon-e788935dd49553f8da1f328f3d651a18c4e72209399d379b73c90307409bef52.scope: Deactivated successfully.
Nov 29 07:41:41 compute-0 sudo[158473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 sudo[158917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 sudo[158943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:41 compute-0 sudo[158943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:41 compute-0 sudo[158943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 sudo[158974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:41 compute-0 sudo[158974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:41 compute-0 sudo[158974]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 sudo[159016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:41 compute-0 sudo[159016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:41 compute-0 sudo[159016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 sudo[159065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:41:41 compute-0 sudo[159065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:41 compute-0 sudo[159114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqrfmrmobjikjywdrviteqloilosizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402100.7990818-259-89851745735533/AnsiballZ_file.py'
Nov 29 07:41:41 compute-0 sudo[159114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:41 compute-0 python3.9[159118]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:41 compute-0 sudo[159114]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:41 compute-0 podman[159167]: 2025-11-29 07:41:41.905383443 +0000 UTC m=+0.045429418 container create df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:41:41 compute-0 systemd[1]: Started libpod-conmon-df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8.scope.
Nov 29 07:41:41 compute-0 podman[159167]: 2025-11-29 07:41:41.886261324 +0000 UTC m=+0.026307319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:42 compute-0 podman[159167]: 2025-11-29 07:41:42.007035852 +0000 UTC m=+0.147081837 container init df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:41:42 compute-0 podman[159167]: 2025-11-29 07:41:42.019074826 +0000 UTC m=+0.159120801 container start df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:41:42 compute-0 podman[159167]: 2025-11-29 07:41:42.02337836 +0000 UTC m=+0.163424435 container attach df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:41:42 compute-0 flamboyant_fermi[159203]: 167 167
Nov 29 07:41:42 compute-0 systemd[1]: libpod-df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8.scope: Deactivated successfully.
Nov 29 07:41:42 compute-0 podman[159167]: 2025-11-29 07:41:42.027920438 +0000 UTC m=+0.167966413 container died df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8aae235a5507d42519ece35cf3c7c9eb9e89cab49a0d0e5d75a092c5153d2ef-merged.mount: Deactivated successfully.
Nov 29 07:41:42 compute-0 podman[159167]: 2025-11-29 07:41:42.072287588 +0000 UTC m=+0.212333563 container remove df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:41:42 compute-0 systemd[1]: libpod-conmon-df68a1e9cac278a57d8c857afaf2cefc51d717809c96356e05a4da5699271ef8.scope: Deactivated successfully.
Nov 29 07:41:42 compute-0 ceph-mon[75237]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:42 compute-0 podman[159274]: 2025-11-29 07:41:42.249057521 +0000 UTC m=+0.048662994 container create 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:41:42 compute-0 systemd[1]: Started libpod-conmon-3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33.scope.
Nov 29 07:41:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:42 compute-0 podman[159274]: 2025-11-29 07:41:42.225808143 +0000 UTC m=+0.025413666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8884085889b9d861c62c7bf42854df7a332ab61edbaeb123e3b26dbe995ebdd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8884085889b9d861c62c7bf42854df7a332ab61edbaeb123e3b26dbe995ebdd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8884085889b9d861c62c7bf42854df7a332ab61edbaeb123e3b26dbe995ebdd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8884085889b9d861c62c7bf42854df7a332ab61edbaeb123e3b26dbe995ebdd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:42 compute-0 podman[159274]: 2025-11-29 07:41:42.344699752 +0000 UTC m=+0.144305305 container init 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:41:42 compute-0 podman[159274]: 2025-11-29 07:41:42.353377459 +0000 UTC m=+0.152982972 container start 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:41:42 compute-0 podman[159274]: 2025-11-29 07:41:42.35838653 +0000 UTC m=+0.157992223 container attach 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:41:42 compute-0 sudo[159368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxddrhzbleutoklkoxcljumgugurueet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402101.9813716-271-82809984903184/AnsiballZ_stat.py'
Nov 29 07:41:42 compute-0 sudo[159368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:42 compute-0 python3.9[159370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
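The ansible-ansible.legacy.stat invocations here are the standard copy/template preamble: stat the destination with a SHA-1 checksum so the follow-up file task can decide idempotently whether anything changed. Roughly what the module computes for its return value (a minimal sketch, not ansible's implementation):

    import hashlib
    import os
    import stat

    def stat_like(path: str) -> dict:
        """Subset of ansible's stat module output with get_checksum=True,
        checksum_algorithm=sha1."""
        if not os.path.lexists(path):
            return {'exists': False}
        st = os.stat(path)
        with open(path, 'rb') as f:
            checksum = hashlib.sha1(f.read()).hexdigest()
        return {'exists': True,
                'mode': format(stat.S_IMODE(st.st_mode), '04o'),
                'size': st.st_size,
                'checksum': checksum}

    stat_like('/etc/systemd/system-preset/91-edpm-container-shutdown.preset')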
Nov 29 07:41:43 compute-0 sudo[159368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 wonderful_germain[159290]: {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     "0": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "devices": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "/dev/loop3"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             ],
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_name": "ceph_lv0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_size": "21470642176",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "name": "ceph_lv0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "tags": {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_name": "ceph",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.crush_device_class": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.encrypted": "0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_id": "0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.vdo": "0"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             },
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "vg_name": "ceph_vg0"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         }
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     ],
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     "1": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "devices": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "/dev/loop4"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             ],
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_name": "ceph_lv1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_size": "21470642176",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "name": "ceph_lv1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "tags": {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_name": "ceph",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.crush_device_class": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.encrypted": "0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_id": "1",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.vdo": "0"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             },
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "vg_name": "ceph_vg1"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         }
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     ],
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     "2": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "devices": [
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "/dev/loop5"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             ],
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_name": "ceph_lv2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_size": "21470642176",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "name": "ceph_lv2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "tags": {
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.cluster_name": "ceph",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.crush_device_class": "",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.encrypted": "0",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osd_id": "2",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:                 "ceph.vdo": "0"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             },
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "type": "block",
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:             "vg_name": "ceph_vg2"
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:         }
Nov 29 07:41:43 compute-0 wonderful_germain[159290]:     ]
Nov 29 07:41:43 compute-0 wonderful_germain[159290]: }
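That JSON is the output of the logged `cephadm ... ceph-volume -- lvm list --format json`: a map of OSD id to LV records, each carrying the cluster fsid, the OSD fsid, and the backing device (here loop devices under ceph_vg0..2). A small consumer of exactly this shape:

    import json

    def osd_summary(lvm_list_json: str) -> dict:
        """Reduce 'ceph-volume lvm list --format json' output to
        {osd_id: (osd_fsid, lv_path, devices)}."""
        summary = {}
        for osd_id, records in json.loads(lvm_list_json).items():
            for rec in records:
                if rec['type'] == 'block':
                    summary[int(osd_id)] = (rec['tags']['ceph.osd_fsid'],
                                            rec['lv_path'],
                                            rec['devices'])
        return summary

    # For the dump above this yields:
    # {0: ('d2206e5d-...', '/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #  1: ('e72f2659-...', '/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #  2: ('2406c235-...', '/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}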
Nov 29 07:41:43 compute-0 systemd[1]: libpod-3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33.scope: Deactivated successfully.
Nov 29 07:41:43 compute-0 podman[159274]: 2025-11-29 07:41:43.228803451 +0000 UTC m=+1.028408924 container died 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:41:43 compute-0 sudo[159460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaqtbgheczimcfsjlbhzpmktpvavnzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402101.9813716-271-82809984903184/AnsiballZ_file.py'
Nov 29 07:41:43 compute-0 sudo[159460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8884085889b9d861c62c7bf42854df7a332ab61edbaeb123e3b26dbe995ebdd8-merged.mount: Deactivated successfully.
Nov 29 07:41:43 compute-0 podman[159274]: 2025-11-29 07:41:43.47309996 +0000 UTC m=+1.272705433 container remove 3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:41:43 compute-0 python3.9[159462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:43 compute-0 systemd[1]: libpod-conmon-3bf4347166686c94ce0ad3093ea3d1715191426c0aef00ce7ee84b0730eb7f33.scope: Deactivated successfully.
Nov 29 07:41:43 compute-0 sudo[159065]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 sudo[159460]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 sudo[159464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:43 compute-0 sudo[159464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:43 compute-0 sudo[159464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 sudo[159513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:43 compute-0 sudo[159513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:43 compute-0 sudo[159513]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 sudo[159551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:43 compute-0 sudo[159551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:43 compute-0 sudo[159551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:43 compute-0 sudo[159604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:41:43 compute-0 sudo[159604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:43 compute-0 sudo[159727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwipyimnlhkxmlhmeomklpxtrwvzunhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402103.677334-283-202116629571536/AnsiballZ_systemd.py'
Nov 29 07:41:43 compute-0 sudo[159727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.097538849 +0000 UTC m=+0.051751795 container create 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:41:44 compute-0 systemd[1]: Started libpod-conmon-1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8.scope.
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.074409193 +0000 UTC m=+0.028622169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.191060694 +0000 UTC m=+0.145273660 container init 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.199937976 +0000 UTC m=+0.154150922 container start 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.204663339 +0000 UTC m=+0.158876305 container attach 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:41:44 compute-0 fervent_kapitsa[159772]: 167 167
Nov 29 07:41:44 compute-0 systemd[1]: libpod-1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8.scope: Deactivated successfully.
Nov 29 07:41:44 compute-0 conmon[159772]: conmon 1192940f281d52200f88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8.scope/container/memory.events
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.208391267 +0000 UTC m=+0.162604213 container died 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-51eddbcb1f811dfe727e25d9d83c3156e88effc8426b0217fd2237d9f295b3eb-merged.mount: Deactivated successfully.
Nov 29 07:41:44 compute-0 podman[159756]: 2025-11-29 07:41:44.251922815 +0000 UTC m=+0.206135761 container remove 1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:41:44 compute-0 systemd[1]: libpod-conmon-1192940f281d52200f8821b7ba223d3ad7bb8eb886f4cdb5fc80322b289fd9e8.scope: Deactivated successfully.
Nov 29 07:41:44 compute-0 python3.9[159739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:41:44 compute-0 systemd[1]: Reloading.
Nov 29 07:41:44 compute-0 systemd-sysv-generator[159833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:41:44 compute-0 systemd-rc-local-generator[159827]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:41:44 compute-0 podman[159797]: 2025-11-29 07:41:44.461393633 +0000 UTC m=+0.065766951 container create 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:41:44 compute-0 ceph-mon[75237]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:44 compute-0 podman[159797]: 2025-11-29 07:41:44.439825358 +0000 UTC m=+0.044198696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:41:44 compute-0 systemd[1]: Started libpod-conmon-225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55.scope.
Nov 29 07:41:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e769dadd5287402d78db318411373d688182c4ecba6eaeeb45be29f8bf43f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e769dadd5287402d78db318411373d688182c4ecba6eaeeb45be29f8bf43f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e769dadd5287402d78db318411373d688182c4ecba6eaeeb45be29f8bf43f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e769dadd5287402d78db318411373d688182c4ecba6eaeeb45be29f8bf43f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:41:44 compute-0 sudo[159727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:44 compute-0 podman[159797]: 2025-11-29 07:41:44.836733958 +0000 UTC m=+0.441107306 container init 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:41:44 compute-0 podman[159797]: 2025-11-29 07:41:44.850050616 +0000 UTC m=+0.454423924 container start 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:41:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:45 compute-0 podman[159797]: 2025-11-29 07:41:45.273834519 +0000 UTC m=+0.878207907 container attach 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:41:45 compute-0 sudo[160004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mafiogzjwhlwkjaiwkqiadhllrwbasqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402104.9684634-291-246457770213567/AnsiballZ_stat.py'
Nov 29 07:41:45 compute-0 sudo[160004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:45 compute-0 python3.9[160006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:45 compute-0 sudo[160004]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:45 compute-0 sudo[160104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdavfctjfxbvidfmjpookhcepvjxfayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402104.9684634-291-246457770213567/AnsiballZ_file.py'
Nov 29 07:41:45 compute-0 sudo[160104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:45 compute-0 keen_merkle[159849]: {
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_id": 2,
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "type": "bluestore"
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     },
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_id": 0,
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "type": "bluestore"
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     },
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_id": 1,
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:41:45 compute-0 keen_merkle[159849]:         "type": "bluestore"
Nov 29 07:41:45 compute-0 keen_merkle[159849]:     }
Nov 29 07:41:45 compute-0 keen_merkle[159849]: }
Nov 29 07:41:45 compute-0 systemd[1]: libpod-225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55.scope: Deactivated successfully.
Nov 29 07:41:45 compute-0 systemd[1]: libpod-225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55.scope: Consumed 1.087s CPU time.
Nov 29 07:41:45 compute-0 podman[159797]: 2025-11-29 07:41:45.934700451 +0000 UTC m=+1.539073759 container died 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-90e769dadd5287402d78db318411373d688182c4ecba6eaeeb45be29f8bf43f1-merged.mount: Deactivated successfully.
Nov 29 07:41:46 compute-0 podman[159797]: 2025-11-29 07:41:46.006832387 +0000 UTC m=+1.611205695 container remove 225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_merkle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:41:46 compute-0 systemd[1]: libpod-conmon-225100bb132e88b0281d942c4f738e7e31b2fdc865febfae982dadc1796d1c55.scope: Deactivated successfully.
Nov 29 07:41:46 compute-0 sudo[159604]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:41:46 compute-0 python3.9[160107]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:46 compute-0 sudo[160104]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:41:46 compute-0 sudo[160273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlchqnvobpytqkvlpmdnydsgaidfuis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402106.2815459-303-46563287342826/AnsiballZ_stat.py'
Nov 29 07:41:46 compute-0 sudo[160273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:46 compute-0 python3.9[160275]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:46 compute-0 sudo[160273]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:47 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b6b41813-0e88-4f53-ab2e-37afb4ca4854 does not exist
Nov 29 07:41:47 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5a522742-2c45-429c-b735-7191fa1927a5 does not exist
Nov 29 07:41:47 compute-0 ceph-mon[75237]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:47 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:47 compute-0 sudo[160325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:47 compute-0 sudo[160325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:47 compute-0 sudo[160325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:47 compute-0 sudo[160374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglednlyoymiewupnrfmmtzniuvrfpgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402106.2815459-303-46563287342826/AnsiballZ_file.py'
Nov 29 07:41:47 compute-0 sudo[160374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:47 compute-0 sudo[160379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:41:47 compute-0 sudo[160379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:47 compute-0 sudo[160379]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:47 compute-0 python3.9[160378]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:47 compute-0 sudo[160374]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:47 compute-0 sudo[160553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqltfyglhnwllogtbcltbbmvpaorexs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402107.6079981-315-129953926209337/AnsiballZ_systemd.py'
Nov 29 07:41:47 compute-0 sudo[160553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:48 compute-0 ceph-mon[75237]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:41:48 compute-0 python3.9[160555]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:41:48 compute-0 systemd[1]: Reloading.
Nov 29 07:41:48 compute-0 systemd-rc-local-generator[160584]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:41:48 compute-0 systemd-sysv-generator[160587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:41:48 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:41:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:41:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:41:48 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:41:48 compute-0 sudo[160553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:49 compute-0 sudo[160746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xavmcxwoelxooghqcgcrlqmsjymyesdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402108.8839033-325-5607000293273/AnsiballZ_file.py'
Nov 29 07:41:49 compute-0 sudo[160746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:49 compute-0 python3.9[160748]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:49 compute-0 sudo[160746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:50 compute-0 sudo[160898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfngvlidjjfvjdvpfrsmztlzbyosgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402109.7088156-333-124017157775306/AnsiballZ_stat.py'
Nov 29 07:41:50 compute-0 sudo[160898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:50 compute-0 python3.9[160900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:50 compute-0 sudo[160898]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:50 compute-0 sudo[161021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexpxcgtumhvnqckrittbzrtqorfayvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402109.7088156-333-124017157775306/AnsiballZ_copy.py'
Nov 29 07:41:50 compute-0 sudo[161021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:50 compute-0 python3.9[161023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402109.7088156-333-124017157775306/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:50 compute-0 sudo[161021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:51 compute-0 ceph-mon[75237]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:52 compute-0 sudo[161173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrkiufitwzyrutcqhlzrxjowhdunhrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402111.1741197-350-203657492846461/AnsiballZ_file.py'
Nov 29 07:41:52 compute-0 sudo[161173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:52 compute-0 python3.9[161175]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:41:52 compute-0 sudo[161173]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:53 compute-0 sudo[161327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilrnvkxdurpsgwevwwoftuqfiznyszg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402112.9330063-358-233999861997306/AnsiballZ_stat.py'
Nov 29 07:41:53 compute-0 sudo[161327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:53 compute-0 ceph-mon[75237]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:53 compute-0 python3.9[161329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:41:53 compute-0 sudo[161327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:54 compute-0 sudo[161450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hudxxxshzqlhwbapyxwlvtitiagtnzbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402112.9330063-358-233999861997306/AnsiballZ_copy.py'
Nov 29 07:41:54 compute-0 sudo[161450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:54 compute-0 python3.9[161452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402112.9330063-358-233999861997306/.source.json _original_basename=.b3cffr0y follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:54 compute-0 sudo[161450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:55 compute-0 sshd-session[161206]: Invalid user conectar from 45.78.219.195 port 46284
Nov 29 07:41:55 compute-0 sudo[161602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamlwpmggcnxlhulcaevldctwedpjjwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402114.8913238-373-194312944165309/AnsiballZ_file.py'
Nov 29 07:41:55 compute-0 sudo[161602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:55 compute-0 sshd-session[161206]: Received disconnect from 45.78.219.195 port 46284:11: Bye Bye [preauth]
Nov 29 07:41:55 compute-0 sshd-session[161206]: Disconnected from invalid user conectar 45.78.219.195 port 46284 [preauth]
Nov 29 07:41:55 compute-0 python3.9[161604]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:41:55 compute-0 sudo[161602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:41:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:56 compute-0 sudo[161754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmdfeobduuiqiwmpezgiacbebhamvdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402115.8015838-381-21427881456912/AnsiballZ_stat.py'
Nov 29 07:41:56 compute-0 sudo[161754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:56 compute-0 ceph-mon[75237]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:56 compute-0 sudo[161754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:56 compute-0 sudo[161877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadbkfmekicblwhihumcmtjilkuohtrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402115.8015838-381-21427881456912/AnsiballZ_copy.py'
Nov 29 07:41:56 compute-0 sudo[161877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:56 compute-0 sudo[161877]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:57 compute-0 ceph-mon[75237]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:57 compute-0 sudo[162029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmaiogpgazktjjqjzawinmpqoduolrig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402117.392212-398-772866619929/AnsiballZ_container_config_data.py'
Nov 29 07:41:57 compute-0 sudo[162029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:58 compute-0 python3.9[162031]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 07:41:58 compute-0 sudo[162029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:58 compute-0 sudo[162181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znlpaqilrgmxjuzxoxylrobykdzyrmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402118.3153853-407-272036732184326/AnsiballZ_container_config_hash.py'
Nov 29 07:41:58 compute-0 sudo[162181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:41:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:59 compute-0 ceph-mon[75237]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:59 compute-0 python3.9[162183]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:41:59 compute-0 sudo[162181]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:00 compute-0 sudo[162333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthjjdyqyqusutfrcqoyspzdhhxbtssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402120.156822-416-78923272467956/AnsiballZ_podman_container_info.py'
Nov 29 07:42:00 compute-0 sudo[162333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:00 compute-0 podman[162335]: 2025-11-29 07:42:00.819611785 +0000 UTC m=+0.160925246 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 07:42:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:00 compute-0 python3.9[162336]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:42:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:01 compute-0 sudo[162333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:01 compute-0 ceph-mon[75237]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:02 compute-0 ceph-mon[75237]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:02 compute-0 sudo[162538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzhydmyrwrwyemtfzzxqmrvxlhmzlkan ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402121.7144887-429-233931106518477/AnsiballZ_edpm_container_manage.py'
Nov 29 07:42:02 compute-0 sudo[162538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:03 compute-0 python3[162540]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:42:04 compute-0 ceph-mon[75237]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:06 compute-0 sshd-session[162586]: Invalid user user from 103.234.151.178 port 5866
Nov 29 07:42:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:06 compute-0 sshd-session[162586]: Received disconnect from 103.234.151.178 port 5866:11: Bye Bye [preauth]
Nov 29 07:42:06 compute-0 sshd-session[162586]: Disconnected from invalid user user 103.234.151.178 port 5866 [preauth]
Nov 29 07:42:07 compute-0 ceph-mon[75237]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:08 compute-0 ceph-mon[75237]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:10 compute-0 ceph-mon[75237]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:14 compute-0 ceph-mon[75237]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:16 compute-0 ceph-mon[75237]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:17 compute-0 podman[162553]: 2025-11-29 07:42:17.242675585 +0000 UTC m=+14.059665196 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:42:17 compute-0 podman[162680]: 2025-11-29 07:42:17.422955675 +0000 UTC m=+0.063086072 container create e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 07:42:17 compute-0 podman[162680]: 2025-11-29 07:42:17.38569646 +0000 UTC m=+0.025826857 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:42:17 compute-0 python3[162540]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:42:17 compute-0 ceph-mon[75237]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:17 compute-0 sudo[162538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:18 compute-0 sudo[162867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grcrpugbymyuboacyqtxnqoegtdznssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402137.8320813-437-221192591463733/AnsiballZ_stat.py'
Nov 29 07:42:18 compute-0 sudo[162867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:18 compute-0 python3.9[162869]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:42:18 compute-0 sudo[162867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:19 compute-0 ceph-mon[75237]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:19 compute-0 sudo[163021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfmwdmxqubsryzwrtkxwmladaxghlhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402138.8896282-446-69730534926985/AnsiballZ_file.py'
Nov 29 07:42:19 compute-0 sudo[163021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:19 compute-0 python3.9[163023]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:19 compute-0 sudo[163021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:19 compute-0 sudo[163097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xguyemepuwcdfhrbsachtwwanqkcusrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402138.8896282-446-69730534926985/AnsiballZ_stat.py'
Nov 29 07:42:19 compute-0 sudo[163097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:20 compute-0 python3.9[163099]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:42:20 compute-0 sudo[163097]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:20 compute-0 ceph-mon[75237]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:20 compute-0 sudo[163248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xugpmeygdsopscanxnfpuoeevnbljzuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402140.177803-446-177293271064850/AnsiballZ_copy.py'
Nov 29 07:42:20 compute-0 sudo[163248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:21 compute-0 python3.9[163250]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402140.177803-446-177293271064850/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:21 compute-0 sudo[163248]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:21 compute-0 sudo[163324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvjjhwgqwimtssguuyhrgjmfydwviztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402140.177803-446-177293271064850/AnsiballZ_systemd.py'
Nov 29 07:42:21 compute-0 sudo[163324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:21 compute-0 python3.9[163326]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:42:21 compute-0 systemd[1]: Reloading.
Nov 29 07:42:21 compute-0 systemd-rc-local-generator[163350]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:42:21 compute-0 systemd-sysv-generator[163354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:42:22 compute-0 sudo[163324]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:23 compute-0 sudo[163438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpycfwywuqwksgnnarquuxusxxhptzcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402140.177803-446-177293271064850/AnsiballZ_systemd.py'
Nov 29 07:42:23 compute-0 sudo[163438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:23 compute-0 ceph-mon[75237]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:23 compute-0 python3.9[163440]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:23 compute-0 systemd[1]: Reloading.
Nov 29 07:42:23 compute-0 systemd-rc-local-generator[163468]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:42:23 compute-0 systemd-sysv-generator[163472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:42:24 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 07:42:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c5e3ce9e2cf5d4b3731114e7a851dab7902ba1df7d4871126998cc8923c937/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c5e3ce9e2cf5d4b3731114e7a851dab7902ba1df7d4871126998cc8923c937/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad.
Nov 29 07:42:24 compute-0 podman[163480]: 2025-11-29 07:42:24.216994963 +0000 UTC m=+0.142367261 container init e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + sudo -E kolla_set_configs
Nov 29 07:42:24 compute-0 podman[163480]: 2025-11-29 07:42:24.249817189 +0000 UTC m=+0.175189477 container start e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:42:24 compute-0 edpm-start-podman-container[163480]: ovn_metadata_agent
Nov 29 07:42:24 compute-0 edpm-start-podman-container[163479]: Creating additional drop-in dependency for "ovn_metadata_agent" (e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad)
Nov 29 07:42:24 compute-0 systemd[1]: Reloading.
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Validating config file
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Copying service configuration files
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Writing out command to execute
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: ++ cat /run_command
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + CMD=neutron-ovn-metadata-agent
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + ARGS=
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + sudo kolla_copy_cacerts
Nov 29 07:42:24 compute-0 podman[163501]: 2025-11-29 07:42:24.363141684 +0000 UTC m=+0.097755644 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + [[ ! -n '' ]]
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + . kolla_extend_start
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + umask 0022
Nov 29 07:42:24 compute-0 ovn_metadata_agent[163495]: + exec neutron-ovn-metadata-agent
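[Annotation] The INFO:__main__ trace above is kolla_set_configs running with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: it loads /var/lib/kolla/config_files/config.json, copies each source/dest pair into place, fixes permissions, and writes the command that the entrypoint then reads back with `cat /run_command` and execs. A minimal Python sketch of that flow, assuming the standard kolla config.json schema ("command" plus a "config_files" list); the real tool adds validation, ownership, and permission handling:

    import json
    import os
    import shutil

    CONFIG_JSON = "/var/lib/kolla/config_files/config.json"

    def copy_one(src: str, dest: str) -> None:
        # Mirrors the "Deleting ..." then "Copying ... to ..." lines above.
        if os.path.exists(dest):
            os.unlink(dest)
        shutil.copy2(src, dest)

    def main() -> None:
        with open(CONFIG_JSON) as f:   # "Loading config file at ..."
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            copy_one(item["source"], item["dest"])
        # "Writing out command to execute": the entrypoint later does
        # CMD=$(cat /run_command); exec $CMD, as the shell trace shows.
        with open("/run_command", "w") as f:
            f.write(cfg["command"])

    if __name__ == "__main__":
        main()

Here /run_command carries "neutron-ovn-metadata-agent", which is why the final trace line execs exactly that binary.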
Nov 29 07:42:24 compute-0 systemd-rc-local-generator[163568]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:42:24 compute-0 systemd-sysv-generator[163571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:42:24 compute-0 ceph-mon[75237]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:24 compute-0 systemd[1]: Started ovn_metadata_agent container.
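[Annotation] The healthcheck events above come from systemd driving `/usr/bin/podman healthcheck run <id>`, which executes the container's configured test ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent per the config_data) and exits 0 when healthy. A small sketch of probing it by hand, assuming the container name from the log:

    import subprocess
    import sys

    def probe(container: str) -> bool:
        # `podman healthcheck run` returns exit code 0 iff the
        # container's health test passed (health_status=healthy above).
        rc = subprocess.run(["podman", "healthcheck", "run", container]).returncode
        return rc == 0

    if __name__ == "__main__":
        sys.exit(0 if probe("ovn_metadata_agent") else 1)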
Nov 29 07:42:24 compute-0 sudo[163438]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:25 compute-0 sshd-session[153899]: Connection closed by 192.168.122.30 port 38640
Nov 29 07:42:25 compute-0 sshd-session[153896]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:42:25 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 07:42:25 compute-0 systemd[1]: session-48.scope: Consumed 1min 595ms CPU time.
Nov 29 07:42:25 compute-0 systemd-logind[782]: Session 48 logged out. Waiting for processes to exit.
Nov 29 07:42:25 compute-0 systemd-logind[782]: Removed session 48.
Nov 29 07:42:26 compute-0 ceph-mon[75237]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.056 163500 INFO neutron.common.config [-] Logging enabled!
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.057 163500 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.057 163500 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
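[Annotation] The long DEBUG wall that follows is oslo.config's log_opt_values() dumping every registered option group at startup (the log cites cfg.py:2602 for the default group and cfg.py:2609 for named groups), emitted because debug=True in neutron.conf. A minimal sketch of how any oslo.config-based service produces such a dump; the single option registered here is illustrative (its default matches the agent_down_time=75 line below), not the agent's full option set:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("agent_down_time", default=75)])

    if __name__ == "__main__":
        CONF([])                                  # parse an empty command line
        # Emits banner lines, "command line args: []", config file lists,
        # then one "name = value" DEBUG line per option, as seen below.
        CONF.log_opt_values(LOG, logging.DEBUG)

Masked values such as metadata_proxy_shared_secret and transport_url appear as **** because oslo.config registers them as secret options.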
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.057 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.057 163500 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.057 163500 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.058 163500 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.059 163500 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.060 163500 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.061 163500 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.062 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.063 163500 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.064 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.065 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.066 163500 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.067 163500 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.068 163500 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.069 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.070 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.071 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.072 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.072 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.072 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.072 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.072 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.073 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.074 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.075 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.076 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.077 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.078 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.079 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.080 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.081 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.082 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.083 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.084 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.085 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.086 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.087 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.088 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.089 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.090 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.091 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.092 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.093 163500 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.093 163500 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:42:27 compute-0 ceph-mon[75237]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.103 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.103 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.103 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.103 163500 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.104 163500 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.116 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 230c4529-a404-4083-a72e-940c7905cc88 (UUID: 230c4529-a404-4083-a72e-940c7905cc88) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.140 163500 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.141 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.141 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.141 163500 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.144 163500 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.150 163500 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.155 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '230c4529-a404-4083-a72e-940c7905cc88'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], external_ids={}, name=230c4529-a404-4083-a72e-940c7905cc88, nb_cfg_timestamp=1764402068528, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.156 163500 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa998717b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.157 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.157 163500 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.157 163500 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.157 163500 INFO oslo_service.service [-] Starting 1 workers
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.163 163500 DEBUG oslo_service.service [-] Started child 163606 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.168 163500 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpmysosjil/privsep.sock']
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.168 163606 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-957082'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.192 163606 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.193 163606 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.193 163606 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.197 163606 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.204 163606 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.211 163606 INFO eventlet.wsgi.server [-] (163606) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 07:42:27 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.855 163500 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.856 163500 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpmysosjil/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.741 163611 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.745 163611 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.748 163611 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.749 163611 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163611
Nov 29 07:42:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:27.859 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[dae22716-9eb4-4231-814d-be9ad3f8d16a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:28.424 163611 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:28.424 163611 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:28.424 163611 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.015 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[91e24f32-7f5a-47ba-9555-ccf680c0622f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.017 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, column=external_ids, values=({'neutron:ovn-metadata-id': '663db6a9-5088-5118-ae5b-17bd9506dba0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.027 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.032 163500 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.033 163500 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.034 163500 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.035 163500 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.036 163500 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.037 163500 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.038 163500 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.039 163500 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.040 163500 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.041 163500 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.042 163500 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.043 163500 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.044 163500 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.045 163500 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.046 163500 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.047 163500 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.048 163500 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.049 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.050 163500 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.051 163500 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.052 163500 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.053 163500 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.054 163500 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.055 163500 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.056 163500 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.057 163500 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.058 163500 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.059 163500 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.060 163500 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.061 163500 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.062 163500 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.063 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.064 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.065 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.066 163500 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.067 163500 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:42:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:42:29.067 163500 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
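The block ending at the row of asterisks above is oslo.config's standard startup dump: at service launch the agent walks every registered option and emits one DEBUG line per value from log_opt_values (cfg.py:2602 for top-level options, cfg.py:2609 for grouped ones). A minimal sketch of that mechanism, assuming stock oslo.config; the two option names and defaults below are samples copied from the dump, and transport_url prints as **** because it is registered with secret=True:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('metadata_workers', default=1),
        cfg.StrOpt('transport_url', default='rabbit://user:pw@host:5672/',
                   secret=True),
    ])
    CONF([])  # parse an empty argv so option values resolve

    # Emits one DEBUG line per registered option; secret=True values are
    # printed as '****', which is why transport_url is masked in the log.
    CONF.log_opt_values(LOG, logging.DEBUG)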
Nov 29 07:42:29 compute-0 ceph-mon[75237]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:31 compute-0 sshd-session[163616]: Accepted publickey for zuul from 192.168.122.30 port 49914 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:42:31 compute-0 systemd-logind[782]: New session 49 of user zuul.
Nov 29 07:42:31 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 29 07:42:31 compute-0 sshd-session[163616]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:42:31 compute-0 podman[163618]: 2025-11-29 07:42:31.85445885 +0000 UTC m=+0.108388316 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
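The podman event above is a periodic healthcheck run for the ovn_controller container (health_status=healthy, failing streak 0); per the config_data shown, the check is the /openstack/healthcheck script bind-mounted into the container. A hedged sketch of reading the same status back from Python, assuming a podman release where the inspect key is .State.Health.Status (older releases exposed it as .State.Healthcheck.Status):

    import subprocess

    def container_health(name: str) -> str:
        # Equivalent to: podman inspect --format '{{.State.Health.Status}}' NAME
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(container_health("ovn_controller"))  # expected here: "healthy"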
Nov 29 07:42:32 compute-0 ceph-mon[75237]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:32 compute-0 python3.9[163795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:42:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:33 compute-0 ceph-mon[75237]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:33 compute-0 sudo[163949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqadedogkkkvwawazbfsbrkcoqijrim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402153.3121405-34-215921058533319/AnsiballZ_command.py'
Nov 29 07:42:33 compute-0 sudo[163949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:34 compute-0 python3.9[163951]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:42:34 compute-0 sudo[163949]: pam_unix(sudo:session): session closed for user root
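The sudo-wrapped task above (session opened at 07:42:33, closed at 07:42:34) is an ansible.legacy.command call checking whether a container named exactly nova_virtlogd exists. Stripped of the AnsiballZ wrapper, it reduces to roughly the following; the argv is copied from the logged _raw_params:

    import subprocess

    res = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True,
    )
    # Non-empty stdout means the container exists (in any state, given -a).
    exists = bool(res.stdout.strip())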
Nov 29 07:42:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:34 compute-0 sudo[164113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqeotecvtkqapvyavagghsadykrcfjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402154.3845534-45-246509516169517/AnsiballZ_systemd_service.py'
Nov 29 07:42:34 compute-0 sudo[164113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:35 compute-0 python3.9[164115]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:42:35 compute-0 systemd[1]: Reloading.
Nov 29 07:42:35 compute-0 systemd-sysv-generator[164144]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:42:35 compute-0 systemd-rc-local-generator[164138]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:42:35 compute-0 sudo[164113]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:35 compute-0 ceph-mon[75237]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:36 compute-0 sshd-session[164226]: Received disconnect from 20.185.243.158 port 57350:11: Bye Bye [preauth]
Nov 29 07:42:36 compute-0 sshd-session[164226]: Disconnected from authenticating user root 20.185.243.158 port 57350 [preauth]
Nov 29 07:42:36 compute-0 python3.9[164301]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:42:36 compute-0 network[164318]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:42:36 compute-0 network[164319]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:42:36 compute-0 network[164320]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:42:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:38 compute-0 ceph-mon[75237]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:42:38
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'backups', 'images', 'default.rgw.control', 'volumes']
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:42:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:39 compute-0 ceph-mon[75237]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:40 compute-0 sudo[164580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wirydqivutpcmtpkuacevtbpnprixaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402160.2409747-64-52927431573304/AnsiballZ_systemd_service.py'
Nov 29 07:42:40 compute-0 sudo[164580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:40 compute-0 ceph-mgr[75527]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1430667654
Nov 29 07:42:40 compute-0 python3.9[164582]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:40 compute-0 sudo[164580]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:41 compute-0 sudo[164733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbezzkhmpagagvvhcwkrvlzuomtzovnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402160.9689229-64-278225792747966/AnsiballZ_systemd_service.py'
Nov 29 07:42:41 compute-0 sudo[164733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:41 compute-0 python3.9[164735]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:41 compute-0 sudo[164733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:42 compute-0 sudo[164886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eefruluqamgozlrphgjkocvyflhawnvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402161.7698374-64-273095717314082/AnsiballZ_systemd_service.py'
Nov 29 07:42:42 compute-0 sudo[164886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:42 compute-0 ceph-mon[75237]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:42 compute-0 python3.9[164888]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:42 compute-0 sudo[164886]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:42:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:43 compute-0 sudo[165039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pascygrsshlpihwsrpzlsudnspqeviyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402162.7740316-64-170313186496569/AnsiballZ_systemd_service.py'
Nov 29 07:42:43 compute-0 sudo[165039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:43 compute-0 ceph-mon[75237]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:43 compute-0 python3.9[165041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:43 compute-0 sudo[165039]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:43 compute-0 sudo[165192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcsndmdyndknfacocqnrdvnrvggcumyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402163.6127682-64-176914126197601/AnsiballZ_systemd_service.py'
Nov 29 07:42:43 compute-0 sudo[165192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:44 compute-0 python3.9[165194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:44 compute-0 sudo[165192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:44 compute-0 sudo[165345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npkihognwwlgqtqitbymwejygmxayglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402164.4218338-64-268964062622269/AnsiballZ_systemd_service.py'
Nov 29 07:42:44 compute-0 sudo[165345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:45 compute-0 python3.9[165349]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:45 compute-0 sudo[165345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:45 compute-0 sudo[165500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maylolpionggwymjhmdicwmixopjppxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402165.2454705-64-150842163815510/AnsiballZ_systemd_service.py'
Nov 29 07:42:45 compute-0 sudo[165500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:45 compute-0 python3.9[165502]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:42:45 compute-0 sshd-session[165346]: Invalid user fan from 114.34.106.146 port 44976
Nov 29 07:42:45 compute-0 sudo[165500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:46 compute-0 sshd-session[165346]: Received disconnect from 114.34.106.146 port 44976:11: Bye Bye [preauth]
Nov 29 07:42:46 compute-0 sshd-session[165346]: Disconnected from invalid user fan 114.34.106.146 port 44976 [preauth]
Nov 29 07:42:46 compute-0 sudo[165653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odqtxvbkqjgylmubuxqfgxncrsqafdpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402166.1492138-116-228553371490152/AnsiballZ_file.py'
Nov 29 07:42:46 compute-0 sudo[165653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:46 compute-0 python3.9[165655]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:46 compute-0 sudo[165653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:47 compute-0 ceph-mon[75237]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:47 compute-0 sudo[165826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdlrzuajbfxfhyvenrfejmpfzflfppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402166.9776204-116-105875280576498/AnsiballZ_file.py'
Nov 29 07:42:47 compute-0 sudo[165826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:47 compute-0 sudo[165785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:47 compute-0 sudo[165785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:47 compute-0 sudo[165785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:47 compute-0 sudo[165833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:47 compute-0 sudo[165833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:47 compute-0 sudo[165833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:47 compute-0 sudo[165858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:47 compute-0 sudo[165858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:47 compute-0 sudo[165858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:47 compute-0 sudo[165883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:42:47 compute-0 sudo[165883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:47 compute-0 python3.9[165830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:47 compute-0 sudo[165826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:47 compute-0 sudo[166080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echyuadkhzrrftowskntavlfggzasqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402167.6238077-116-201449745774209/AnsiballZ_file.py'
Nov 29 07:42:47 compute-0 sudo[166080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:47 compute-0 sudo[165883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:42:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:42:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:42:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:42:48 compute-0 python3.9[166088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:48 compute-0 sudo[166080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:49 compute-0 sudo[166238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiwirbhiodmgpebvslcmjultgihtbrlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402169.3493013-116-183451144776977/AnsiballZ_file.py'
Nov 29 07:42:49 compute-0 sudo[166238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:50 compute-0 python3.9[166240]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:50 compute-0 sudo[166238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:51 compute-0 ceph-mon[75237]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:51 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:42:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:42:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6c97f99e-3208-4c2e-99f6-75a2e27b232c does not exist
Nov 29 07:42:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1efa0886-ad3e-4289-b056-f5858a559141 does not exist
Nov 29 07:42:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 89bd2e67-312f-4b06-b16b-3fc3eaa5b3ff does not exist
Nov 29 07:42:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:42:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:42:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:42:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:42:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:42:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:51 compute-0 sudo[166343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:51 compute-0 sudo[166343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:51 compute-0 sudo[166343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:51 compute-0 sudo[166434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pclfrtayopwsejmxkiamnwjbwkhanwfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402171.3315-116-53429563705258/AnsiballZ_file.py'
Nov 29 07:42:51 compute-0 sudo[166434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:51 compute-0 sudo[166401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:51 compute-0 sudo[166401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:51 compute-0 sudo[166401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:51 compute-0 sudo[166445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:51 compute-0 sudo[166445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:51 compute-0 sudo[166445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:51 compute-0 sudo[166470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:42:51 compute-0 sudo[166470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:51 compute-0 python3.9[166442]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:51 compute-0 sudo[166434]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.16207493 +0000 UTC m=+0.047612921 container create 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:42:52 compute-0 systemd[1]: Started libpod-conmon-377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8.scope.
Nov 29 07:42:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.140193282 +0000 UTC m=+0.025731293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.243501821 +0000 UTC m=+0.129039842 container init 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.253245831 +0000 UTC m=+0.138783822 container start 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.257554856 +0000 UTC m=+0.143092847 container attach 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:52 compute-0 nice_visvesvaraya[166651]: 167 167
Nov 29 07:42:52 compute-0 systemd[1]: libpod-377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8.scope: Deactivated successfully.
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.260210522 +0000 UTC m=+0.145748513 container died 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd693a01d43ecacda22fb77f198b214952e7322b87c17249cb7161ad7cb0786-merged.mount: Deactivated successfully.
Nov 29 07:42:52 compute-0 podman[166600]: 2025-11-29 07:42:52.321071297 +0000 UTC m=+0.206609288 container remove 377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:42:52 compute-0 systemd[1]: libpod-conmon-377f4603314580b1988799cb7b365a9438eca272d4002de40ecd6265bd6e17e8.scope: Deactivated successfully.
Nov 29 07:42:52 compute-0 sudo[166721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmurhcfunmevbnnuhdrzkbaiempspohq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402172.0663939-116-32083492179627/AnsiballZ_file.py'
Nov 29 07:42:52 compute-0 sudo[166721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:52 compute-0 sshd-session[166241]: Invalid user admin1 from 103.236.140.19 port 44624
Nov 29 07:42:52 compute-0 ceph-mon[75237]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:52 compute-0 podman[166729]: 2025-11-29 07:42:52.506844043 +0000 UTC m=+0.047890348 container create 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:52 compute-0 ceph-mon[75237]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:42:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:42:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:42:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:52 compute-0 systemd[1]: Started libpod-conmon-74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61.scope.
Nov 29 07:42:52 compute-0 python3.9[166723]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:52 compute-0 podman[166729]: 2025-11-29 07:42:52.488347869 +0000 UTC m=+0.029394194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:52 compute-0 sudo[166721]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:52 compute-0 podman[166729]: 2025-11-29 07:42:52.600784072 +0000 UTC m=+0.141830407 container init 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:42:52 compute-0 podman[166729]: 2025-11-29 07:42:52.610157053 +0000 UTC m=+0.151203358 container start 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:42:52 compute-0 podman[166729]: 2025-11-29 07:42:52.615114884 +0000 UTC m=+0.156161189 container attach 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:42:52 compute-0 sshd-session[166241]: Received disconnect from 103.236.140.19 port 44624:11: Bye Bye [preauth]
Nov 29 07:42:52 compute-0 sshd-session[166241]: Disconnected from invalid user admin1 103.236.140.19 port 44624 [preauth]
Nov 29 07:42:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:52 compute-0 sudo[166900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswwnzmssfpzsnihskqdcwklzoeiahtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402172.7211611-116-157314501000959/AnsiballZ_file.py'
Nov 29 07:42:52 compute-0 sudo[166900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:53 compute-0 python3.9[166902]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:53 compute-0 sudo[166900]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:53 compute-0 ceph-mon[75237]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:53 compute-0 sudo[167074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erdeylewawjcgcknpajtgqhigwjvjuee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402173.40398-166-59290653092022/AnsiballZ_file.py'
Nov 29 07:42:53 compute-0 sudo[167074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:53 compute-0 dazzling_morse[166746]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:42:53 compute-0 dazzling_morse[166746]: --> relative data size: 1.0
Nov 29 07:42:53 compute-0 dazzling_morse[166746]: --> All data devices are unavailable
Nov 29 07:42:53 compute-0 systemd[1]: libpod-74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61.scope: Deactivated successfully.
Nov 29 07:42:53 compute-0 systemd[1]: libpod-74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61.scope: Consumed 1.084s CPU time.
Nov 29 07:42:53 compute-0 podman[166729]: 2025-11-29 07:42:53.759942761 +0000 UTC m=+1.300989076 container died 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:42:53 compute-0 python3.9[167077]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:53 compute-0 sudo[167074]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec22610f5b27d0537fe4c5ff2668fbdd3d526dfc3f5a1be798c27566db7a402-merged.mount: Deactivated successfully.
Nov 29 07:42:53 compute-0 podman[166729]: 2025-11-29 07:42:53.986892248 +0000 UTC m=+1.527938543 container remove 74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:42:54 compute-0 systemd[1]: libpod-conmon-74d81a1e9430cc526d5e379116f0c8ba7cf2d1cf8e1b901a9994069a19bb3d61.scope: Deactivated successfully.
Nov 29 07:42:54 compute-0 sudo[166470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:54 compute-0 sudo[167156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:54 compute-0 sudo[167156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:54 compute-0 sudo[167156]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:54 compute-0 sudo[167205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:54 compute-0 sudo[167205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:54 compute-0 sudo[167205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:54 compute-0 sudo[167245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:54 compute-0 sudo[167245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:54 compute-0 sudo[167245]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:54 compute-0 sudo[167335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeviyrhdzeigxrsfzktgdsmjvuqohjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402174.0208633-166-208344917092225/AnsiballZ_file.py'
Nov 29 07:42:54 compute-0 sudo[167335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:54 compute-0 sudo[167300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:42:54 compute-0 sudo[167300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:54 compute-0 python3.9[167340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:54 compute-0 sudo[167335]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.71323781 +0000 UTC m=+0.051700312 container create 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:42:54 compute-0 systemd[1]: Started libpod-conmon-524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4.scope.
Nov 29 07:42:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.688940202 +0000 UTC m=+0.027402724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.803278353 +0000 UTC m=+0.141740885 container init 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.820471745 +0000 UTC m=+0.158934257 container start 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.825200282 +0000 UTC m=+0.163662844 container attach 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:42:54 compute-0 suspicious_cohen[167424]: 167 167
Nov 29 07:42:54 compute-0 systemd[1]: libpod-524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4.scope: Deactivated successfully.
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.829359104 +0000 UTC m=+0.167821616 container died 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:54 compute-0 podman[167421]: 2025-11-29 07:42:54.840776614 +0000 UTC m=+0.078736215 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48f4404d1c1837d72911daf77f9b7a8171554510edc9f053a24ec5242e22642-merged.mount: Deactivated successfully.
Nov 29 07:42:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:54 compute-0 podman[167407]: 2025-11-29 07:42:54.883371701 +0000 UTC m=+0.221834203 container remove 524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:42:54 compute-0 systemd[1]: libpod-conmon-524ac17b026e414cfc33c99718941b05c02f4ed9e9e3a0dd356619d0b04bd9d4.scope: Deactivated successfully.
Nov 29 07:42:55 compute-0 podman[167539]: 2025-11-29 07:42:55.064499093 +0000 UTC m=+0.051996990 container create 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:55 compute-0 systemd[1]: Started libpod-conmon-3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06.scope.
Nov 29 07:42:55 compute-0 podman[167539]: 2025-11-29 07:42:55.045429274 +0000 UTC m=+0.032927211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:55 compute-0 sudo[167608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jojempmzwpfimpwsxxzafggkkrzmplkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402174.8302033-166-277313802349836/AnsiballZ_file.py'
Nov 29 07:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7a60fb7fb4d9fb976f05e58f7bbded463c7905b450c3a8be1d89b578b50d66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:55 compute-0 sudo[167608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7a60fb7fb4d9fb976f05e58f7bbded463c7905b450c3a8be1d89b578b50d66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7a60fb7fb4d9fb976f05e58f7bbded463c7905b450c3a8be1d89b578b50d66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7a60fb7fb4d9fb976f05e58f7bbded463c7905b450c3a8be1d89b578b50d66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:55 compute-0 podman[167539]: 2025-11-29 07:42:55.177665344 +0000 UTC m=+0.165163271 container init 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:55 compute-0 podman[167539]: 2025-11-29 07:42:55.188816848 +0000 UTC m=+0.176314755 container start 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 07:42:55 compute-0 podman[167539]: 2025-11-29 07:42:55.19421266 +0000 UTC m=+0.181710567 container attach 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:42:55 compute-0 python3.9[167610]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:55 compute-0 sudo[167608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:42:55 compute-0 sudo[167762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itekokjgdtyvbdauikbpldfndsgvduyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402175.5204053-166-84616764479542/AnsiballZ_file.py'
Nov 29 07:42:55 compute-0 sudo[167762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:56 compute-0 python3.9[167764]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:56 compute-0 sudo[167762]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:56 compute-0 focused_payne[167600]: {
Nov 29 07:42:56 compute-0 focused_payne[167600]:     "0": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:         {
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "devices": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "/dev/loop3"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             ],
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_name": "ceph_lv0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_size": "21470642176",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "name": "ceph_lv0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "tags": {
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.crush_device_class": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.encrypted": "0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_id": "0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.vdo": "0"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             },
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "vg_name": "ceph_vg0"
Nov 29 07:42:56 compute-0 focused_payne[167600]:         }
Nov 29 07:42:56 compute-0 focused_payne[167600]:     ],
Nov 29 07:42:56 compute-0 focused_payne[167600]:     "1": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:         {
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "devices": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "/dev/loop4"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             ],
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_name": "ceph_lv1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_size": "21470642176",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "name": "ceph_lv1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "tags": {
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.crush_device_class": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.encrypted": "0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_id": "1",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.vdo": "0"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             },
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "vg_name": "ceph_vg1"
Nov 29 07:42:56 compute-0 focused_payne[167600]:         }
Nov 29 07:42:56 compute-0 focused_payne[167600]:     ],
Nov 29 07:42:56 compute-0 focused_payne[167600]:     "2": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:         {
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "devices": [
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "/dev/loop5"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             ],
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_name": "ceph_lv2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_size": "21470642176",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "name": "ceph_lv2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "tags": {
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.crush_device_class": "",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.encrypted": "0",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osd_id": "2",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:                 "ceph.vdo": "0"
Nov 29 07:42:56 compute-0 focused_payne[167600]:             },
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "type": "block",
Nov 29 07:42:56 compute-0 focused_payne[167600]:             "vg_name": "ceph_vg2"
Nov 29 07:42:56 compute-0 focused_payne[167600]:         }
Nov 29 07:42:56 compute-0 focused_payne[167600]:     ]
Nov 29 07:42:56 compute-0 focused_payne[167600]: }
Nov 29 07:42:56 compute-0 systemd[1]: libpod-3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06.scope: Deactivated successfully.
Nov 29 07:42:56 compute-0 podman[167539]: 2025-11-29 07:42:56.12106784 +0000 UTC m=+1.108565757 container died 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 07:42:56 compute-0 sudo[167928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskczctcuvkcvcqhetnideozznilvjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402176.1804442-166-224536504970189/AnsiballZ_file.py'
Nov 29 07:42:56 compute-0 sudo[167928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:57 compute-0 ceph-mon[75237]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 python3.9[167930]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:58 compute-0 sudo[167928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c7a60fb7fb4d9fb976f05e58f7bbded463c7905b450c3a8be1d89b578b50d66-merged.mount: Deactivated successfully.
Nov 29 07:42:58 compute-0 podman[167539]: 2025-11-29 07:42:58.307030244 +0000 UTC m=+3.294528181 container remove 3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:58 compute-0 sudo[167300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:58 compute-0 sudo[168009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:58 compute-0 sudo[168009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:58 compute-0 sudo[168009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:58 compute-0 systemd[1]: libpod-conmon-3917136d7e99b567ab6a57a53bdd43dc79773fa9420337c9115b7cd1f36e5b06.scope: Deactivated successfully.
Nov 29 07:42:58 compute-0 sudo[168057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:58 compute-0 sudo[168057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:58 compute-0 sudo[168057]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:58 compute-0 sudo[168099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:58 compute-0 sudo[168099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:58 compute-0 sudo[168099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:58 compute-0 sudo[168163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqeixvzovhhdpirycglaabqlllpzuux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402178.282067-166-179651034822128/AnsiballZ_file.py'
Nov 29 07:42:58 compute-0 sudo[168163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:58 compute-0 sudo[168153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:42:58 compute-0 sudo[168153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 python3.9[168182]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:58 compute-0 sudo[168163]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:59 compute-0 podman[168225]: 2025-11-29 07:42:58.927122744 +0000 UTC m=+0.027632869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:59 compute-0 ceph-mon[75237]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:59 compute-0 podman[168225]: 2025-11-29 07:42:59.11289843 +0000 UTC m=+0.213408535 container create 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:42:59 compute-0 systemd[1]: Started libpod-conmon-21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671.scope.
Nov 29 07:42:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:59 compute-0 sudo[168393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwcexguuqcolmfcdjjmqflijvmsfkjyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402179.0668256-166-172026452249955/AnsiballZ_file.py'
Nov 29 07:42:59 compute-0 sudo[168393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:42:59 compute-0 podman[168225]: 2025-11-29 07:42:59.519307258 +0000 UTC m=+0.619817403 container init 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:42:59 compute-0 podman[168225]: 2025-11-29 07:42:59.526925576 +0000 UTC m=+0.627435701 container start 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:42:59 compute-0 systemd[1]: libpod-21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671.scope: Deactivated successfully.
Nov 29 07:42:59 compute-0 conmon[168340]: conmon 21c826325bc7777a1612 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671.scope/container/memory.events
Nov 29 07:42:59 compute-0 nice_chaplygin[168340]: 167 167
Nov 29 07:42:59 compute-0 python3.9[168395]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:42:59 compute-0 sudo[168393]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:00 compute-0 sudo[168559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qagnzvmknjydcfpkhmflljxjwmthafxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402179.8419712-217-59726311739808/AnsiballZ_command.py'
Nov 29 07:43:00 compute-0 sudo[168559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:00 compute-0 podman[168225]: 2025-11-29 07:43:00.660907866 +0000 UTC m=+1.761418011 container attach 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:43:00 compute-0 podman[168225]: 2025-11-29 07:43:00.66147445 +0000 UTC m=+1.761984555 container died 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:43:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:00 compute-0 python3.9[168561]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:00 compute-0 sudo[168559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:01 compute-0 ceph-mon[75237]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:01 compute-0 python3.9[168713]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b650472c828db79da1cae02ba199067733fd9082c900e6490a9565606860b579-merged.mount: Deactivated successfully.
Nov 29 07:43:02 compute-0 podman[168225]: 2025-11-29 07:43:02.698713669 +0000 UTC m=+3.799223774 container remove 21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chaplygin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:43:02 compute-0 systemd[1]: libpod-conmon-21c826325bc7777a16125693521cd04fe0ab7a8bae5fbf27c79c250e02a08671.scope: Deactivated successfully.
Nov 29 07:43:02 compute-0 podman[168715]: 2025-11-29 07:43:02.867639781 +0000 UTC m=+0.938982599 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:43:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:02 compute-0 ceph-mon[75237]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:02 compute-0 podman[168818]: 2025-11-29 07:43:02.930998418 +0000 UTC m=+0.072412731 container create 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:43:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:02 compute-0 systemd[1]: Started libpod-conmon-0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0.scope.
Nov 29 07:43:02 compute-0 podman[168818]: 2025-11-29 07:43:02.89567105 +0000 UTC m=+0.037085423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:43:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d404406d7371cc0ddff26af696942a5424bd0324690c9aebb7d7437781d60a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d404406d7371cc0ddff26af696942a5424bd0324690c9aebb7d7437781d60a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d404406d7371cc0ddff26af696942a5424bd0324690c9aebb7d7437781d60a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d404406d7371cc0ddff26af696942a5424bd0324690c9aebb7d7437781d60a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:03 compute-0 sudo[168918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpfncaipranjbxcfefyvesvlwilvaeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402182.7789824-235-28566541106220/AnsiballZ_systemd_service.py'
Nov 29 07:43:03 compute-0 sudo[168918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:03 compute-0 podman[168818]: 2025-11-29 07:43:03.245677632 +0000 UTC m=+0.387091965 container init 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:43:03 compute-0 podman[168818]: 2025-11-29 07:43:03.257128134 +0000 UTC m=+0.398542447 container start 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:43:03 compute-0 podman[168818]: 2025-11-29 07:43:03.28020265 +0000 UTC m=+0.421616983 container attach 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:43:03 compute-0 python3.9[168920]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:43:03 compute-0 systemd[1]: Reloading.
Nov 29 07:43:03 compute-0 systemd-rc-local-generator[168950]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:43:03 compute-0 systemd-sysv-generator[168954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:43:03 compute-0 sudo[168918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:03 compute-0 ceph-mon[75237]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:04 compute-0 inspiring_villani[168865]: {
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_id": 2,
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "type": "bluestore"
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     },
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_id": 0,
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "type": "bluestore"
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     },
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_id": 1,
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:         "type": "bluestore"
Nov 29 07:43:04 compute-0 inspiring_villani[168865]:     }
Nov 29 07:43:04 compute-0 inspiring_villani[168865]: }
Nov 29 07:43:04 compute-0 systemd[1]: libpod-0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0.scope: Deactivated successfully.
Nov 29 07:43:04 compute-0 systemd[1]: libpod-0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0.scope: Consumed 1.025s CPU time.
Nov 29 07:43:04 compute-0 podman[168818]: 2025-11-29 07:43:04.277200183 +0000 UTC m=+1.418614486 container died 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3d404406d7371cc0ddff26af696942a5424bd0324690c9aebb7d7437781d60a-merged.mount: Deactivated successfully.
Nov 29 07:43:04 compute-0 podman[168818]: 2025-11-29 07:43:04.341610331 +0000 UTC m=+1.483024644 container remove 0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:43:04 compute-0 sudo[169147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltnkgyfgdggbaznftzdbbkrattorhezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402184.0116417-243-16901024094527/AnsiballZ_command.py'
Nov 29 07:43:04 compute-0 systemd[1]: libpod-conmon-0901c3224a046ca84549504962a4044441981ac8d04fd6a2e6b5aa2ce7226dc0.scope: Deactivated successfully.
Nov 29 07:43:04 compute-0 sudo[169147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:04 compute-0 sudo[168153]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:43:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:43:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:43:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:43:04 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7dc1f637-63f2-4c89-97b6-6e7eecffa4ae does not exist
Nov 29 07:43:04 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e641ebe4-5009-47dd-8313-efd20694eadf does not exist
Nov 29 07:43:04 compute-0 sudo[169150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:04 compute-0 sudo[169150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:04 compute-0 sudo[169150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:04 compute-0 python3.9[169149]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:04 compute-0 sudo[169175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:43:04 compute-0 sudo[169175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:04 compute-0 sudo[169175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:04 compute-0 sudo[169147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:05 compute-0 sudo[169350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdbkuiytvzysszxqthfdypuprynjwuyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402184.7210763-243-205445183636372/AnsiballZ_command.py'
Nov 29 07:43:05 compute-0 sudo[169350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:05 compute-0 python3.9[169352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:05 compute-0 sudo[169350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:43:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:43:05 compute-0 ceph-mon[75237]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:05 compute-0 sudo[169503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jucmehalxroccublsjnexsdmymmgrcbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402185.4891717-243-66230911586497/AnsiballZ_command.py'
Nov 29 07:43:05 compute-0 sudo[169503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:06 compute-0 python3.9[169505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:06 compute-0 sudo[169503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:06 compute-0 sudo[169656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtykxtepoidahaqqfmuwocaxmwleiea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402186.219276-243-162023241759054/AnsiballZ_command.py'
Nov 29 07:43:06 compute-0 sudo[169656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:06 compute-0 python3.9[169658]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:06 compute-0 sudo[169656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:07 compute-0 sudo[169809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtkwwjspsmqwzlwqjwbzussgkdwtafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402186.9147315-243-257025314603987/AnsiballZ_command.py'
Nov 29 07:43:07 compute-0 sudo[169809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:07 compute-0 python3.9[169811]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:07 compute-0 ceph-mon[75237]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:07 compute-0 sudo[169809]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:07 compute-0 sudo[169962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerrbxoftgfqtilwigxomhdxaczksufe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402187.6282287-243-247362807838724/AnsiballZ_command.py'
Nov 29 07:43:07 compute-0 sudo[169962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:08 compute-0 python3.9[169964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:08 compute-0 sudo[169962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:08 compute-0 sudo[170115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvqoxyptjrwytoeuchjutzmmyfoetgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402188.3297672-243-250249601327881/AnsiballZ_command.py'
Nov 29 07:43:08 compute-0 sudo[170115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:08 compute-0 python3.9[170117]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:43:08 compute-0 sudo[170115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:09 compute-0 ceph-mon[75237]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:09 compute-0 sudo[170268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllwwvcppshtoaswdkwtkfnfnozldwbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402189.375701-297-253061983103757/AnsiballZ_getent.py'
Nov 29 07:43:09 compute-0 sudo[170268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:10 compute-0 python3.9[170270]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 07:43:10 compute-0 sudo[170268]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:10 compute-0 auditd[699]: Audit daemon rotating log files
Nov 29 07:43:10 compute-0 sudo[170421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfkofwevifvndcnechgrqrovuqibwgvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402190.3419242-305-57026213115487/AnsiballZ_group.py'
Nov 29 07:43:10 compute-0 sudo[170421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:11 compute-0 python3.9[170423]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:43:11 compute-0 groupadd[170424]: group added to /etc/group: name=libvirt, GID=42473
Nov 29 07:43:11 compute-0 groupadd[170424]: group added to /etc/gshadow: name=libvirt
Nov 29 07:43:11 compute-0 groupadd[170424]: new group: name=libvirt, GID=42473
Nov 29 07:43:11 compute-0 sudo[170421]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:11 compute-0 sudo[170579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifpznqicwlfriqgnnloghqbozhmrlgtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402191.3684156-313-274357913395671/AnsiballZ_user.py'
Nov 29 07:43:11 compute-0 sudo[170579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:11 compute-0 ceph-mon[75237]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:12 compute-0 python3.9[170581]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:43:12 compute-0 useradd[170583]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:43:12 compute-0 sudo[170579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:12 compute-0 sudo[170739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwzrvdxuseaojqiqhtunfjcgtrxahcum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402192.555694-324-194723990023273/AnsiballZ_setup.py'
Nov 29 07:43:12 compute-0 sudo[170739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:13 compute-0 python3.9[170741]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:43:13 compute-0 sudo[170739]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:13 compute-0 sudo[170823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shbmacgjexvekenfceqotppxbvievkkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402192.555694-324-194723990023273/AnsiballZ_dnf.py'
Nov 29 07:43:13 compute-0 sudo[170823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:43:14 compute-0 ceph-mon[75237]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:14 compute-0 python3.9[170825]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:43:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:15 compute-0 ceph-mon[75237]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:17 compute-0 ceph-mon[75237]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:19 compute-0 ceph-mon[75237]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:21 compute-0 ceph-mon[75237]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:24 compute-0 ceph-mon[75237]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:25 compute-0 sshd-session[170941]: Received disconnect from 103.234.151.178 port 29684:11: Bye Bye [preauth]
Nov 29 07:43:25 compute-0 sshd-session[170941]: Disconnected from authenticating user root 103.234.151.178 port 29684 [preauth]
Nov 29 07:43:25 compute-0 podman[170997]: 2025-11-29 07:43:25.956487042 +0000 UTC m=+0.101183017 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:43:26 compute-0 ceph-mon[75237]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:43:27.095 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:43:27.096 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:43:27.096 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:27 compute-0 ceph-mon[75237]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:29 compute-0 ceph-mon[75237]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:31 compute-0 ceph-mon[75237]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:33 compute-0 podman[171037]: 2025-11-29 07:43:33.96371066 +0000 UTC m=+0.130858593 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:43:33 compute-0 ceph-mon[75237]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:36 compute-0 ceph-mon[75237]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:38 compute-0 ceph-mon[75237]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:43:38
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log']
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:43:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:39 compute-0 ceph-mon[75237]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:42 compute-0 ceph-mon[75237]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:42 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:43:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:43:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:44 compute-0 ceph-mon[75237]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:45 compute-0 ceph-mon[75237]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:48 compute-0 ceph-mon[75237]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:49 compute-0 ceph-mon[75237]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:52 compute-0 ceph-mon[75237]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:52 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:43:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:43:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:54 compute-0 ceph-mon[75237]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:43:56 compute-0 ceph-mon[75237]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:56 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 07:43:56 compute-0 sshd-session[171078]: Invalid user pivpn from 20.185.243.158 port 51628
Nov 29 07:43:56 compute-0 sshd-session[171078]: Received disconnect from 20.185.243.158 port 51628:11: Bye Bye [preauth]
Nov 29 07:43:56 compute-0 sshd-session[171078]: Disconnected from invalid user pivpn 20.185.243.158 port 51628 [preauth]
Nov 29 07:43:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:56 compute-0 podman[171080]: 2025-11-29 07:43:56.938086763 +0000 UTC m=+0.104677328 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:43:57 compute-0 ceph-mon[75237]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:58 compute-0 sshd-session[171100]: Invalid user roott from 114.34.106.146 port 44680
Nov 29 07:43:58 compute-0 sshd-session[171100]: Received disconnect from 114.34.106.146 port 44680:11: Bye Bye [preauth]
Nov 29 07:43:58 compute-0 sshd-session[171100]: Disconnected from invalid user roott 114.34.106.146 port 44680 [preauth]
Nov 29 07:43:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:00 compute-0 ceph-mon[75237]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:01 compute-0 ceph-mon[75237]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:03 compute-0 ceph-mon[75237]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:04 compute-0 sudo[171102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:04 compute-0 sudo[171102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[171102]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 sudo[171133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:04 compute-0 sudo[171133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[171133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 podman[171126]: 2025-11-29 07:44:04.757794142 +0000 UTC m=+0.090454517 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 07:44:04 compute-0 sudo[171177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:04 compute-0 sudo[171177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[171177]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 sudo[171227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:44:04 compute-0 sudo[171227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:05 compute-0 sudo[171227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:44:05 compute-0 ceph-mon[75237]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 32c4a398-3016-479c-ae11-31873daa9dd5 does not exist
Nov 29 07:44:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9001f016-384e-4bf9-be62-48fcc6941c2d does not exist
Nov 29 07:44:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c029c4dc-ec25-45e3-95c7-151780bc486a does not exist
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:44:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:44:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:05 compute-0 sudo[171666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:05 compute-0 sudo[171666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:05 compute-0 sudo[171666]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:05 compute-0 sudo[171731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:05 compute-0 sudo[171731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:05 compute-0 sudo[171731]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:05 compute-0 sudo[171794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:05 compute-0 sudo[171794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:05 compute-0 sudo[171794]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:05 compute-0 sudo[171857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:44:05 compute-0 sudo[171857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:05 compute-0 podman[172154]: 2025-11-29 07:44:05.893477281 +0000 UTC m=+0.023475820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.178152639 +0000 UTC m=+0.308151128 container create 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:44:06 compute-0 systemd[1]: Started libpod-conmon-972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581.scope.
Nov 29 07:44:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.528430046 +0000 UTC m=+0.658428575 container init 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:44:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.54165794 +0000 UTC m=+0.671656449 container start 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:44:06 compute-0 ecstatic_hodgkin[172390]: 167 167
Nov 29 07:44:06 compute-0 systemd[1]: libpod-972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581.scope: Deactivated successfully.
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.548136754 +0000 UTC m=+0.678135263 container attach 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.549383148 +0000 UTC m=+0.679381647 container died 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5877269c75b17c84054b91197ffd770f656e5b94e7303a8250134b4e9dc3975-merged.mount: Deactivated successfully.
Nov 29 07:44:06 compute-0 podman[172154]: 2025-11-29 07:44:06.596541973 +0000 UTC m=+0.726540472 container remove 972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:44:06 compute-0 systemd[1]: libpod-conmon-972e152c830f5b7144061afcced62888f97c2f2f46c477a3352d66d0072c3581.scope: Deactivated successfully.
Nov 29 07:44:06 compute-0 podman[172703]: 2025-11-29 07:44:06.756189896 +0000 UTC m=+0.023529872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:07 compute-0 podman[172703]: 2025-11-29 07:44:07.710248532 +0000 UTC m=+0.977588518 container create 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:44:07 compute-0 ceph-mon[75237]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:07 compute-0 systemd[1]: Started libpod-conmon-3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d.scope.
Nov 29 07:44:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:08 compute-0 podman[172703]: 2025-11-29 07:44:08.050084379 +0000 UTC m=+1.317424375 container init 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:08 compute-0 podman[172703]: 2025-11-29 07:44:08.058499335 +0000 UTC m=+1.325839311 container start 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:44:08 compute-0 podman[172703]: 2025-11-29 07:44:08.081461061 +0000 UTC m=+1.348801057 container attach 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:09 compute-0 charming_hodgkin[173382]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:44:09 compute-0 charming_hodgkin[173382]: --> relative data size: 1.0
Nov 29 07:44:09 compute-0 charming_hodgkin[173382]: --> All data devices are unavailable
Nov 29 07:44:09 compute-0 systemd[1]: libpod-3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d.scope: Deactivated successfully.
Nov 29 07:44:09 compute-0 podman[172703]: 2025-11-29 07:44:09.207871391 +0000 UTC m=+2.475211387 container died 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-764b774cdee15f3043807a2dd388f61374904a01e4a7cc431ffdb023da66336d-merged.mount: Deactivated successfully.
Nov 29 07:44:09 compute-0 podman[172703]: 2025-11-29 07:44:09.263518233 +0000 UTC m=+2.530858209 container remove 3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:44:09 compute-0 systemd[1]: libpod-conmon-3b4dab5b355a9be0da10fbd791e54606f274791686d5aa5fcb3a7ff6e8c81d9d.scope: Deactivated successfully.
Nov 29 07:44:09 compute-0 sudo[171857]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:09 compute-0 sudo[174204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:09 compute-0 sudo[174204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:09 compute-0 sudo[174204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:09 compute-0 sudo[174275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:09 compute-0 sudo[174275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:09 compute-0 sudo[174275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:09 compute-0 sudo[174343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:09 compute-0 sudo[174343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:09 compute-0 sudo[174343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:09 compute-0 sudo[174408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:44:09 compute-0 sudo[174408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.864594019 +0000 UTC m=+0.040271501 container create ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:44:09 compute-0 systemd[1]: Started libpod-conmon-ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec.scope.
Nov 29 07:44:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.848845046 +0000 UTC m=+0.024522558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.95295719 +0000 UTC m=+0.128634692 container init ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.959177176 +0000 UTC m=+0.134854668 container start ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.962243518 +0000 UTC m=+0.137921090 container attach ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:09 compute-0 pedantic_faraday[174755]: 167 167
Nov 29 07:44:09 compute-0 systemd[1]: libpod-ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec.scope: Deactivated successfully.
Nov 29 07:44:09 compute-0 podman[174685]: 2025-11-29 07:44:09.9667639 +0000 UTC m=+0.142441392 container died ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7182bdd86fb3eaa5b2101ccc325aa1ed77e4d78468577d9dce8f52285a169381-merged.mount: Deactivated successfully.
Nov 29 07:44:10 compute-0 podman[174685]: 2025-11-29 07:44:10.011635084 +0000 UTC m=+0.187312596 container remove ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:44:10 compute-0 systemd[1]: libpod-conmon-ffa4afdd1a1eb18827264a739443b52ef38e0b4aeb5369436b424e0cc1be93ec.scope: Deactivated successfully.
Nov 29 07:44:10 compute-0 ceph-mon[75237]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:10 compute-0 sshd-session[173834]: Invalid user ftpuser from 103.236.140.19 port 33584
Nov 29 07:44:10 compute-0 podman[174904]: 2025-11-29 07:44:10.190512212 +0000 UTC m=+0.057812761 container create fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:44:10 compute-0 systemd[1]: Started libpod-conmon-fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6.scope.
Nov 29 07:44:10 compute-0 podman[174904]: 2025-11-29 07:44:10.166046276 +0000 UTC m=+0.033346825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6123c89dd90bcc551731e936103856b4f5bfcf495a76291d7047dc03130f798e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6123c89dd90bcc551731e936103856b4f5bfcf495a76291d7047dc03130f798e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6123c89dd90bcc551731e936103856b4f5bfcf495a76291d7047dc03130f798e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6123c89dd90bcc551731e936103856b4f5bfcf495a76291d7047dc03130f798e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:10 compute-0 podman[174904]: 2025-11-29 07:44:10.29777526 +0000 UTC m=+0.165075789 container init fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:10 compute-0 podman[174904]: 2025-11-29 07:44:10.304915822 +0000 UTC m=+0.172216361 container start fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:44:10 compute-0 podman[174904]: 2025-11-29 07:44:10.30854599 +0000 UTC m=+0.175846549 container attach fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:10 compute-0 sshd-session[173834]: Received disconnect from 103.236.140.19 port 33584:11: Bye Bye [preauth]
Nov 29 07:44:10 compute-0 sshd-session[173834]: Disconnected from invalid user ftpuser 103.236.140.19 port 33584 [preauth]
Nov 29 07:44:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:11 compute-0 mystifying_cray[174977]: {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     "0": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "devices": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "/dev/loop3"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             ],
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_name": "ceph_lv0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_size": "21470642176",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "name": "ceph_lv0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "tags": {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.crush_device_class": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.encrypted": "0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_id": "0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.vdo": "0"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             },
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "vg_name": "ceph_vg0"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         }
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     ],
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     "1": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "devices": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "/dev/loop4"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             ],
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_name": "ceph_lv1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_size": "21470642176",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "name": "ceph_lv1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "tags": {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.crush_device_class": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.encrypted": "0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_id": "1",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.vdo": "0"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             },
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "vg_name": "ceph_vg1"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         }
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     ],
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     "2": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "devices": [
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "/dev/loop5"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             ],
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_name": "ceph_lv2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_size": "21470642176",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "name": "ceph_lv2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "tags": {
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.crush_device_class": "",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.encrypted": "0",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osd_id": "2",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:                 "ceph.vdo": "0"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             },
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "type": "block",
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:             "vg_name": "ceph_vg2"
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:         }
Nov 29 07:44:11 compute-0 mystifying_cray[174977]:     ]
Nov 29 07:44:11 compute-0 mystifying_cray[174977]: }
Nov 29 07:44:11 compute-0 systemd[1]: libpod-fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6.scope: Deactivated successfully.
Nov 29 07:44:11 compute-0 conmon[174977]: conmon fb4dc1a5f0bbccbeae2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6.scope/container/memory.events
Nov 29 07:44:11 compute-0 podman[174904]: 2025-11-29 07:44:11.065213139 +0000 UTC m=+0.932513688 container died fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6123c89dd90bcc551731e936103856b4f5bfcf495a76291d7047dc03130f798e-merged.mount: Deactivated successfully.
Nov 29 07:44:11 compute-0 ceph-mon[75237]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:11 compute-0 podman[174904]: 2025-11-29 07:44:11.133923793 +0000 UTC m=+1.001224302 container remove fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:44:11 compute-0 systemd[1]: libpod-conmon-fb4dc1a5f0bbccbeae2c79224025689f33984b5cd492d330980e3d02ceafefb6.scope: Deactivated successfully.
Nov 29 07:44:11 compute-0 sudo[174408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:11 compute-0 sudo[175502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:11 compute-0 sudo[175502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:11 compute-0 sudo[175502]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:11 compute-0 sudo[175574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:11 compute-0 sudo[175574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:11 compute-0 sudo[175574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:11 compute-0 sudo[175636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:11 compute-0 sudo[175636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:11 compute-0 sudo[175636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:11 compute-0 sudo[175702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:44:11 compute-0 sudo[175702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.806963959 +0000 UTC m=+0.040010704 container create 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:11 compute-0 systemd[1]: Started libpod-conmon-82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c.scope.
Nov 29 07:44:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.88337608 +0000 UTC m=+0.116422825 container init 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.789153281 +0000 UTC m=+0.022200056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.890746177 +0000 UTC m=+0.123792922 container start 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.894628362 +0000 UTC m=+0.127675107 container attach 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:44:11 compute-0 condescending_einstein[176057]: 167 167
Nov 29 07:44:11 compute-0 systemd[1]: libpod-82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c.scope: Deactivated successfully.
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.898367531 +0000 UTC m=+0.131414276 container died 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6332f976a36f8afb023fb1948e8c2bc0393c509b113d1caa008203d92f939350-merged.mount: Deactivated successfully.
Nov 29 07:44:11 compute-0 podman[175994]: 2025-11-29 07:44:11.949988886 +0000 UTC m=+0.183035661 container remove 82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_einstein, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:11 compute-0 systemd[1]: libpod-conmon-82bd6499fe322af41520ad95cff43f387151e56a350cd99efe05b548afa7601c.scope: Deactivated successfully.
Nov 29 07:44:12 compute-0 podman[176195]: 2025-11-29 07:44:12.136196022 +0000 UTC m=+0.049268573 container create 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:44:12 compute-0 systemd[1]: Started libpod-conmon-73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca.scope.
Nov 29 07:44:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:12 compute-0 podman[176195]: 2025-11-29 07:44:12.118258611 +0000 UTC m=+0.031331182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76704a5393e183d7c6e5341ad744577c070c903aa2ceab3ce2c24277b0de9ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76704a5393e183d7c6e5341ad744577c070c903aa2ceab3ce2c24277b0de9ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76704a5393e183d7c6e5341ad744577c070c903aa2ceab3ce2c24277b0de9ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e76704a5393e183d7c6e5341ad744577c070c903aa2ceab3ce2c24277b0de9ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:12 compute-0 podman[176195]: 2025-11-29 07:44:12.235347082 +0000 UTC m=+0.148419663 container init 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:12 compute-0 podman[176195]: 2025-11-29 07:44:12.247053876 +0000 UTC m=+0.160126417 container start 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 07:44:12 compute-0 podman[176195]: 2025-11-29 07:44:12.254160967 +0000 UTC m=+0.167233568 container attach 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:13 compute-0 confident_blackwell[176255]: {
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_id": 2,
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "type": "bluestore"
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     },
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_id": 0,
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "type": "bluestore"
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     },
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_id": 1,
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:         "type": "bluestore"
Nov 29 07:44:13 compute-0 confident_blackwell[176255]:     }
Nov 29 07:44:13 compute-0 confident_blackwell[176255]: }
Nov 29 07:44:13 compute-0 systemd[1]: libpod-73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca.scope: Deactivated successfully.
Nov 29 07:44:13 compute-0 podman[176195]: 2025-11-29 07:44:13.288764053 +0000 UTC m=+1.201836594 container died 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:44:13 compute-0 systemd[1]: libpod-73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca.scope: Consumed 1.046s CPU time.
Nov 29 07:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e76704a5393e183d7c6e5341ad744577c070c903aa2ceab3ce2c24277b0de9ba-merged.mount: Deactivated successfully.
Nov 29 07:44:13 compute-0 podman[176195]: 2025-11-29 07:44:13.356013957 +0000 UTC m=+1.269086508 container remove 73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:44:13 compute-0 systemd[1]: libpod-conmon-73f9bf62cb7cd5fe399e8d1ef7c18c4e9bc8b0e999e53ea08a2245e8f5e4a0ca.scope: Deactivated successfully.
Nov 29 07:44:13 compute-0 sudo[175702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:44:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:44:13 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:13 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev da988df2-d89d-48d5-b2c3-d0ce93cc8701 does not exist
Nov 29 07:44:13 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1189047a-6e01-486f-9d27-7540e955ac83 does not exist
Nov 29 07:44:13 compute-0 sudo[176921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:13 compute-0 sudo[176921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:13 compute-0 sudo[176921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:13 compute-0 sudo[176987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:44:13 compute-0 sudo[176987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:13 compute-0 sudo[176987]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:13 compute-0 ceph-mon[75237]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:13 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:44:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:15 compute-0 ceph-mon[75237]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:18 compute-0 ceph-mon[75237]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:20 compute-0 ceph-mon[75237]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:22 compute-0 ceph-mon[75237]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:24 compute-0 ceph-mon[75237]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:25 compute-0 ceph-mon[75237]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:44:27.097 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:44:27.100 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:44:27.100 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:27 compute-0 podman[185208]: 2025-11-29 07:44:27.932817388 +0000 UTC m=+0.077092019 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 07:44:27 compute-0 ceph-mon[75237]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:29 compute-0 ceph-mon[75237]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:32 compute-0 ceph-mon[75237]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:34 compute-0 ceph-mon[75237]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 podman[188815]: 2025-11-29 07:44:34.929374023 +0000 UTC m=+0.099152771 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:44:35 compute-0 ceph-mon[75237]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:37 compute-0 ceph-mon[75237]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:44:38
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:44:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:40 compute-0 ceph-mon[75237]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:42 compute-0 ceph-mon[75237]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:44:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:43 compute-0 ceph-mon[75237]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:46 compute-0 ceph-mon[75237]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:47 compute-0 ceph-mon[75237]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:48 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:44:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
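[annotation] The SID-table conversion plus the capability lines above are the kernel's signature of a policy reload (matching the avc: op=load_policy line two seconds later, seqno=14 — a package install shipped a policy module). The same capability flags can be read back at runtime from selinuxfs:

```python
import os

# Read back the policy capabilities the kernel just logged; this path is
# standard on RHEL 9 with selinuxfs mounted at /sys/fs/selinux.
CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

for name in sorted(os.listdir(CAPS_DIR)):
    with open(os.path.join(CAPS_DIR, name)) as f:
        print(f"{name}={f.read().strip()}")  # e.g. open_perms=1
```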
Nov 29 07:44:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:49 compute-0 ceph-mon[75237]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:50 compute-0 groupadd[188866]: group added to /etc/group: name=dnsmasq, GID=991
Nov 29 07:44:50 compute-0 groupadd[188866]: group added to /etc/gshadow: name=dnsmasq
Nov 29 07:44:50 compute-0 groupadd[188866]: new group: name=dnsmasq, GID=991
Nov 29 07:44:50 compute-0 useradd[188873]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 29 07:44:50 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:44:50 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 07:44:50 compute-0 dbus-broker-launch[747]: Noticed file-system modification, trigger reload.
Nov 29 07:44:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:51 compute-0 groupadd[188886]: group added to /etc/group: name=clevis, GID=990
Nov 29 07:44:51 compute-0 groupadd[188886]: group added to /etc/gshadow: name=clevis
Nov 29 07:44:51 compute-0 groupadd[188886]: new group: name=clevis, GID=990
Nov 29 07:44:51 compute-0 useradd[188893]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 29 07:44:51 compute-0 usermod[188903]: add 'clevis' to group 'tss'
Nov 29 07:44:51 compute-0 usermod[188903]: add 'clevis' to shadow group 'tss'
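[annotation] The dnsmasq and clevis accounts above follow the standard package-scriptlet pattern: a dedicated system group, a matching no-login system user, and (for clevis) membership in tss for TPM access. A rough CLI equivalent, as a sketch — the flags are the usual shadow-utils ones, not necessarily the exact scriptlet invocation:

```python
import subprocess

# Sketch of the account-creation sequence logged above.
def system_account(name, home, groups=()):
    subprocess.run(["groupadd", "-r", name], check=True)
    subprocess.run(["useradd", "-r", "-g", name, "-d", home,
                    "-s", "/usr/sbin/nologin", name], check=True)
    for g in groups:
        subprocess.run(["usermod", "-aG", g, name], check=True)

system_account("dnsmasq", "/var/lib/dnsmasq")
system_account("clevis", "/var/cache/clevis", groups=("tss",))
```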
Nov 29 07:44:52 compute-0 ceph-mon[75237]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:53 compute-0 ceph-mon[75237]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:53 compute-0 polkitd[43585]: Reloading rules
Nov 29 07:44:53 compute-0 polkitd[43585]: Collecting garbage unconditionally...
Nov 29 07:44:53 compute-0 polkitd[43585]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:44:53 compute-0 polkitd[43585]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:44:53 compute-0 polkitd[43585]: Finished loading, compiling and executing 3 rules
Nov 29 07:44:53 compute-0 polkitd[43585]: Reloading rules
Nov 29 07:44:53 compute-0 polkitd[43585]: Collecting garbage unconditionally...
Nov 29 07:44:53 compute-0 polkitd[43585]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:44:53 compute-0 polkitd[43585]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:44:53 compute-0 polkitd[43585]: Finished loading, compiling and executing 3 rules
Nov 29 07:44:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:55 compute-0 groupadd[189090]: group added to /etc/group: name=ceph, GID=167
Nov 29 07:44:55 compute-0 groupadd[189090]: group added to /etc/gshadow: name=ceph
Nov 29 07:44:55 compute-0 groupadd[189090]: new group: name=ceph, GID=167
Nov 29 07:44:55 compute-0 useradd[189096]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
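[annotation] The pg_autoscaler numbers above are internally consistent: every "pg target" equals usage_ratio × bias × 300, where 300 is assumed to be the cluster PG budget (mon_target_pg_per_osd, default 100, times the 3 OSDs this job deploys below) — that assumption reproduces the logged values to full precision. The "quantized" figure is then the next power of two clamped to the pool's pg_num_min (1 for .mgr, 16 for the CephFS metadata pool, 32 as the general default — the usual defaults, assumed here):

```python
# Reproduce the pg_autoscaler arithmetic from the lines above.
TOTAL_TARGET_PGS = 300  # assumption: 100 target PGs/OSD x 3 OSDs

def next_pow2(x):
    p = 1
    while p < x:
        p *= 2
    return p

def autoscale(usage_ratio, bias, pg_num_min):
    raw = usage_ratio * bias * TOTAL_TARGET_PGS
    return raw, max(pg_num_min, next_pow2(raw))

print(autoscale(7.185749983720779e-06, 1.0, 1))    # .mgr -> (0.0021557..., 1)
print(autoscale(5.087256625643029e-07, 4.0, 16))   # cephfs.cephfs.meta -> (0.0006104..., 16)
print(autoscale(2.5436283128215145e-07, 1.0, 32))  # .rgw.root -> (7.63e-05, 32)
```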
Nov 29 07:44:56 compute-0 ceph-mon[75237]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:58 compute-0 ceph-mon[75237]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:58 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 07:44:58 compute-0 sshd[1003]: Received signal 15; terminating.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Unit process 177700 (sshd-session) remains running after unit stopped.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Unit process 177707 (sshd-session) remains running after unit stopped.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Unit process 189103 (sshd-session) remains running after unit stopped.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Unit process 189104 (sshd-session) remains running after unit stopped.
Nov 29 07:44:58 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 07:44:58 compute-0 systemd[1]: sshd.service: Consumed 7.855s CPU time, 41.7M memory peak, read 564.0K from disk, written 280.0K to disk.
Nov 29 07:44:58 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 07:44:58 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 29 07:44:58 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:44:58 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:44:58 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:44:58 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 29 07:44:58 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 29 07:44:58 compute-0 sshd[189732]: Server listening on 0.0.0.0 port 22.
Nov 29 07:44:58 compute-0 sshd[189732]: Server listening on :: port 22.
Nov 29 07:44:58 compute-0 systemd[1]: Started OpenSSH server daemon.
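[annotation] Note the restart pattern above: the unit deactivates, yet its sshd-session children (PIDs 177700, 177707, 189103, 189104 — the live and authenticating connections) keep running, and a fresh listener takes over port 22. That is the expected behavior when the unit kills only its main process on stop (KillMode=process, which sshd.service on RHEL should report), so in-flight SSH sessions survive the daemon restart. A quick check, as a sketch:

```python
import subprocess

# Confirm why sshd-session processes outlive the unit: sshd.service should
# report KillMode=process (kill only the listener, leave sessions alone).
out = subprocess.run(["systemctl", "show", "sshd", "-p", "KillMode"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())  # expected: KillMode=process
```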
Nov 29 07:44:58 compute-0 podman[189721]: 2025-11-29 07:44:58.402226619 +0000 UTC m=+0.060957407 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 07:44:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:59 compute-0 sshd-session[189103]: Connection closed by 103.234.151.178 port 53510 [preauth]
Nov 29 07:45:00 compute-0 ceph-mon[75237]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:45:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:45:00 compute-0 systemd[1]: Reloading.
Nov 29 07:45:00 compute-0 systemd-rc-local-generator[189999]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:00 compute-0 systemd-sysv-generator[190003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:45:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:01 compute-0 anacron[30971]: Job `cron.weekly' started
Nov 29 07:45:01 compute-0 anacron[30971]: Job `cron.weekly' terminated
Nov 29 07:45:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:03 compute-0 ceph-mon[75237]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:04 compute-0 sshd-session[177700]: Connection closed by 45.78.219.195 port 45350 [preauth]
Nov 29 07:45:04 compute-0 ceph-mon[75237]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:05 compute-0 podman[194947]: 2025-11-29 07:45:05.993785193 +0000 UTC m=+0.147545948 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
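[annotation] The two health_status events above (ovn_metadata_agent and ovn_controller, both healthy, failing streak 0) come from podman's periodic healthcheck timer running the configured '/openstack/healthcheck' test inside each container. The current state can be read back from the inspect data — a sketch, assuming the container names from the log and that podman exposes the Docker-style Health block:

```python
import json
import subprocess

# Read the health state behind the periodic health_status events above.
for name in ("ovn_metadata_agent", "ovn_controller"):
    raw = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    health = json.loads(raw)[0]["State"].get("Health", {})
    print(name, health.get("Status"), "failing_streak:", health.get("FailingStreak"))
```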
Nov 29 07:45:06 compute-0 ceph-mon[75237]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:07 compute-0 ceph-mon[75237]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:07 compute-0 sudo[170823]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:08 compute-0 sudo[196879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btverkztebgrzxrdpdqlvahvudrllttl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402308.0604-336-192892776931575/AnsiballZ_systemd.py'
Nov 29 07:45:08 compute-0 sudo[196879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:09 compute-0 python3.9[196898]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
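[annotation] From here the zuul-driven Ansible tasks disable and mask the monolithic libvirt units one by one (libvirtd above, then the libvirtd-tcp/-tls and virtproxyd sockets, before enabling the modular virt*d services); each ansible.builtin.systemd call with masked=True is what triggers the "Reloading." lines that follow it. A rough CLI equivalent of the task above — a sketch, not what the module literally executes:

```python
import subprocess

# CLI equivalent of the logged task: ansible.builtin.systemd with
# name=libvirtd, state=stopped, enabled=False, masked=True.
for cmd in (["systemctl", "stop", "libvirtd"],
            ["systemctl", "disable", "libvirtd"],
            ["systemctl", "mask", "libvirtd"]):
    subprocess.run(cmd, check=True)
```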
Nov 29 07:45:09 compute-0 systemd[1]: Reloading.
Nov 29 07:45:09 compute-0 systemd-sysv-generator[197381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:09 compute-0 systemd-rc-local-generator[197377]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:09 compute-0 ceph-mon[75237]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:09 compute-0 sudo[196879]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:10 compute-0 sudo[198651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbvomjajrfkpxiefrrtdhctehzkqcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402309.6077945-336-121145566537679/AnsiballZ_systemd.py'
Nov 29 07:45:10 compute-0 sudo[198651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:11 compute-0 python3.9[198664]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:45:11 compute-0 systemd[1]: Reloading.
Nov 29 07:45:11 compute-0 systemd-sysv-generator[198790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:11 compute-0 systemd-rc-local-generator[198785]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:11 compute-0 sudo[198651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:11 compute-0 sudo[198945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjuziudojxumuqnubsfkbzqudlurkgna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402311.6703684-336-66676585084691/AnsiballZ_systemd.py'
Nov 29 07:45:11 compute-0 sudo[198945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:12 compute-0 ceph-mon[75237]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:12 compute-0 python3.9[198947]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:45:12 compute-0 systemd[1]: Reloading.
Nov 29 07:45:12 compute-0 systemd-rc-local-generator[198977]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:12 compute-0 systemd-sysv-generator[198981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:45:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:45:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.431s CPU time.
Nov 29 07:45:12 compute-0 systemd[1]: run-r4d6da7708ce14bd89d272a0b108b5f79.service: Deactivated successfully.
Nov 29 07:45:12 compute-0 sudo[198945]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:13 compute-0 ceph-mon[75237]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:13 compute-0 sudo[199136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzskyiykmjndtwpaprcymodhlkjwhdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402312.9874089-336-23627813366831/AnsiballZ_systemd.py'
Nov 29 07:45:13 compute-0 sudo[199136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:13 compute-0 sudo[199139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:13 compute-0 sudo[199139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:13 compute-0 sudo[199139]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:13 compute-0 python3.9[199138]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:45:13 compute-0 sudo[199164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:13 compute-0 sudo[199164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:13 compute-0 sudo[199164]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:13 compute-0 systemd[1]: Reloading.
Nov 29 07:45:13 compute-0 systemd-rc-local-generator[199241]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:13 compute-0 systemd-sysv-generator[199244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:14 compute-0 sudo[199190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:14 compute-0 sudo[199190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 sudo[199190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 sudo[199136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 sudo[199251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:45:14 compute-0 sudo[199251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 sudo[199456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctzuekxdmclsgpkunnirlhoreixugoeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402314.2571576-365-247319758587655/AnsiballZ_systemd.py'
Nov 29 07:45:14 compute-0 sudo[199456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:14 compute-0 sudo[199251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 317832e1-c487-4333-be06-45f88476f096 does not exist
Nov 29 07:45:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 04969c08-780d-406f-83c8-f7af0f53456c does not exist
Nov 29 07:45:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f20e2799-7e25-4b4e-8cb6-2c99bc41aafa does not exist
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:45:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
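[annotation] The audit block above shows cephadm (via mgr.compute-0.fwfehy) gathering what it needs to deploy OSDs on this host: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the destroyed-OSD subtree. The same queries can be run by hand with the ceph CLI — a sketch, assuming admin keyring access:

```python
import subprocess

# Hand-run the commands the mgr dispatches in the audit log above.
for cmd in (["ceph", "config", "generate-minimal-conf"],
            ["ceph", "auth", "get", "client.bootstrap-osd"],
            ["ceph", "osd", "tree", "destroyed", "--format", "json"]):
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print("$ " + " ".join(cmd))
    print(out.stdout.strip())
```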
Nov 29 07:45:14 compute-0 sudo[199459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:14 compute-0 sudo[199459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 sudo[199459]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 sudo[199484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:14 compute-0 sudo[199484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 sudo[199484]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 sudo[199509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:14 compute-0 sudo[199509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 sudo[199509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:14 compute-0 sudo[199534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:45:14 compute-0 sudo[199534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:14 compute-0 python3.9[199458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:14 compute-0 systemd[1]: Reloading.
Nov 29 07:45:15 compute-0 systemd-rc-local-generator[199602]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:15 compute-0 systemd-sysv-generator[199606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.214324678 +0000 UTC m=+0.051493125 container create ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.194658117 +0000 UTC m=+0.031826594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:15 compute-0 sshd-session[199588]: Received disconnect from 20.185.243.158 port 38430:11: Bye Bye [preauth]
Nov 29 07:45:15 compute-0 sshd-session[199588]: Disconnected from authenticating user root 20.185.243.158 port 38430 [preauth]
Nov 29 07:45:15 compute-0 sudo[199456]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:15 compute-0 systemd[1]: Started libpod-conmon-ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd.scope.
Nov 29 07:45:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.370077182 +0000 UTC m=+0.207245649 container init ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.378110934 +0000 UTC m=+0.215279381 container start ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.382585691 +0000 UTC m=+0.219754168 container attach ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:45:15 compute-0 focused_cori[199655]: 167 167
Nov 29 07:45:15 compute-0 systemd[1]: libpod-ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd.scope: Deactivated successfully.
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.399081186 +0000 UTC m=+0.236249633 container died ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-80607e259af26f96abf8c8e0cc4ebef5767e1928321354df6c137ba7ff0511e2-merged.mount: Deactivated successfully.
Nov 29 07:45:15 compute-0 podman[199639]: 2025-11-29 07:45:15.447044975 +0000 UTC m=+0.284213422 container remove ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:15 compute-0 systemd[1]: libpod-conmon-ecac795e7acc4859ace55309f294295a08b2c87ca4dc7a289bbb262f500dffcd.scope: Deactivated successfully.
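[annotation] The focused_cori container above is a classic cephadm probe: the full create→init→start→attach→died→remove lifecycle completes in under a quarter second, and its only stdout is "167 167" — consistent with a check of the ceph UID/GID baked into the image (167:167, matching the ceph account created on the host at 07:44:55). The same one-shot pattern, sketched (assumes the image digest from the log is already pulled and the image runs ad-hoc commands directly):

```python
import subprocess

# One-shot container probe, same lifecycle shape as focused_cori above.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(["podman", "run", "--rm", IMAGE, "id", "-u", "ceph"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())  # expected: 167
```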
Nov 29 07:45:15 compute-0 podman[199755]: 2025-11-29 07:45:15.661812423 +0000 UTC m=+0.064626670 container create 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:45:15 compute-0 ceph-mon[75237]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:15 compute-0 systemd[1]: Started libpod-conmon-7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01.scope.
Nov 29 07:45:15 compute-0 podman[199755]: 2025-11-29 07:45:15.629618021 +0000 UTC m=+0.032432318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:15 compute-0 podman[199755]: 2025-11-29 07:45:15.756364778 +0000 UTC m=+0.159179005 container init 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:15 compute-0 podman[199755]: 2025-11-29 07:45:15.764675628 +0000 UTC m=+0.167489855 container start 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 07:45:15 compute-0 podman[199755]: 2025-11-29 07:45:15.768427538 +0000 UTC m=+0.171241775 container attach 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:15 compute-0 sudo[199850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqjzukcmkoouhlawbimdweenwffiogye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402315.4799588-365-55276786981207/AnsiballZ_systemd.py'
Nov 29 07:45:15 compute-0 sudo[199850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:16 compute-0 python3.9[199852]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:16 compute-0 systemd[1]: Reloading.
Nov 29 07:45:16 compute-0 systemd-sysv-generator[199879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:16 compute-0 systemd-rc-local-generator[199876]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:16 compute-0 sudo[199850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:16 compute-0 epic_herschel[199803]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:45:16 compute-0 epic_herschel[199803]: --> relative data size: 1.0
Nov 29 07:45:16 compute-0 epic_herschel[199803]: --> All data devices are unavailable
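[annotation] epic_herschel is the `lvm batch` run from the 07:45:14 cephadm call: three pre-made LVs (ceph_vg0/ceph_lv0 … ceph_vg2/ceph_lv2) are passed in, and batch reports them all unavailable — which is what batch prints when the LVs are already consumed, e.g. they carry OSDs from an earlier pass, so there is nothing new to create. cephadm's own follow-up is the `lvm list` call at 07:45:17 below; the same check by hand, sketched (assumes ceph-volume is run inside the deployment container as cephadm does):

```python
import json
import subprocess

# When 'lvm batch' rejects every device, list what already lives on the LVs.
raw = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
for osd_id, devices in json.loads(raw).items():
    print("osd." + osd_id, [d.get("lv_path") for d in devices])
```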
Nov 29 07:45:16 compute-0 sudo[200064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvgbhusflapypcenhtzffblmtuknyplc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402316.7051558-365-68671995902567/AnsiballZ_systemd.py'
Nov 29 07:45:16 compute-0 systemd[1]: libpod-7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01.scope: Deactivated successfully.
Nov 29 07:45:16 compute-0 podman[199755]: 2025-11-29 07:45:16.993346184 +0000 UTC m=+1.396160391 container died 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:45:16 compute-0 sudo[200064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:16 compute-0 systemd[1]: libpod-7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01.scope: Consumed 1.032s CPU time.
Nov 29 07:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eeb947ef015b8d977b91efac08818c1f34df7d131f54cb679095045afe449e4-merged.mount: Deactivated successfully.
Nov 29 07:45:17 compute-0 podman[199755]: 2025-11-29 07:45:17.058915796 +0000 UTC m=+1.461730003 container remove 7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:45:17 compute-0 systemd[1]: libpod-conmon-7f6331520c5b45a660d66f7c5a92cc8beda9759a25a9815cc74eb44054e64d01.scope: Deactivated successfully.
Nov 29 07:45:17 compute-0 sudo[199534]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[200078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:17 compute-0 sudo[200078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[200078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[200103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:17 compute-0 sudo[200103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[200103]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[200128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:17 compute-0 sudo[200128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[200128]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 python3.9[200067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:17 compute-0 sudo[200153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:45:17 compute-0 sudo[200153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 systemd[1]: Reloading.
Nov 29 07:45:17 compute-0 systemd-rc-local-generator[200202]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:17 compute-0 systemd-sysv-generator[200210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.706836363 +0000 UTC m=+0.043183445 container create 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:17 compute-0 sudo[200064]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 systemd[1]: Started libpod-conmon-7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb.scope.
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.69127743 +0000 UTC m=+0.027624542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.814523874 +0000 UTC m=+0.150870976 container init 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.822458554 +0000 UTC m=+0.158805636 container start 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.825907626 +0000 UTC m=+0.162254738 container attach 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:45:17 compute-0 stoic_chatelet[200272]: 167 167
Nov 29 07:45:17 compute-0 systemd[1]: libpod-7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb.scope: Deactivated successfully.
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.828822967 +0000 UTC m=+0.165170059 container died 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cdad1bd7b88c434fd186be4e1b20d52b71fdf892ac17fa72e2b081aa0b6d6f9-merged.mount: Deactivated successfully.
Nov 29 07:45:17 compute-0 podman[200255]: 2025-11-29 07:45:17.871155871 +0000 UTC m=+0.207502953 container remove 7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:17 compute-0 systemd[1]: libpod-conmon-7c20bb97cdecc3b2ee4f3907fe30df0bb3ad0d615fdde46afbbc574873536ecb.scope: Deactivated successfully.
Nov 29 07:45:18 compute-0 podman[200390]: 2025-11-29 07:45:18.012710744 +0000 UTC m=+0.026029125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:18 compute-0 sudo[200459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tglehaeansydntoerzjogaxptvldygbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402317.8846729-365-49000982218834/AnsiballZ_systemd.py'
Nov 29 07:45:18 compute-0 sudo[200459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:18 compute-0 ceph-mon[75237]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:18 compute-0 podman[200390]: 2025-11-29 07:45:18.256583258 +0000 UTC m=+0.269901659 container create ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:18 compute-0 systemd[1]: Started libpod-conmon-ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86.scope.
Nov 29 07:45:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2e00ab2f7f2a69da9a9fd042e0ce8da48f839b1b0931d34a959f7f3ad5ef4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2e00ab2f7f2a69da9a9fd042e0ce8da48f839b1b0931d34a959f7f3ad5ef4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2e00ab2f7f2a69da9a9fd042e0ce8da48f839b1b0931d34a959f7f3ad5ef4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2e00ab2f7f2a69da9a9fd042e0ce8da48f839b1b0931d34a959f7f3ad5ef4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:18 compute-0 python3.9[200461]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:18 compute-0 sudo[200459]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:18 compute-0 podman[200390]: 2025-11-29 07:45:18.653150032 +0000 UTC m=+0.666468433 container init ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:18 compute-0 podman[200390]: 2025-11-29 07:45:18.662896575 +0000 UTC m=+0.676214966 container start ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:45:18 compute-0 podman[200390]: 2025-11-29 07:45:18.764159563 +0000 UTC m=+0.777478014 container attach ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:19 compute-0 sudo[200623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrdsxtfafhxgdiwvrrqrrftfotnrbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402318.7572768-365-277515049119123/AnsiballZ_systemd.py'
Nov 29 07:45:19 compute-0 sudo[200623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:19 compute-0 sshd-session[200462]: Invalid user gerrit from 114.34.106.146 port 44004
Nov 29 07:45:19 compute-0 python3.9[200625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:19 compute-0 ceph-mon[75237]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:19 compute-0 optimistic_elion[200466]: {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     "0": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "devices": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "/dev/loop3"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             ],
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_name": "ceph_lv0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_size": "21470642176",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "name": "ceph_lv0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "tags": {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.crush_device_class": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.encrypted": "0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_id": "0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.vdo": "0"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             },
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "vg_name": "ceph_vg0"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         }
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     ],
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     "1": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "devices": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "/dev/loop4"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             ],
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_name": "ceph_lv1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_size": "21470642176",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "name": "ceph_lv1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "tags": {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.crush_device_class": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.encrypted": "0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_id": "1",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.vdo": "0"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             },
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "vg_name": "ceph_vg1"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         }
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     ],
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     "2": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "devices": [
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "/dev/loop5"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             ],
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_name": "ceph_lv2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_size": "21470642176",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "name": "ceph_lv2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "tags": {
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.crush_device_class": "",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.encrypted": "0",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osd_id": "2",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:                 "ceph.vdo": "0"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             },
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "type": "block",
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:             "vg_name": "ceph_vg2"
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:         }
Nov 29 07:45:19 compute-0 optimistic_elion[200466]:     ]
Nov 29 07:45:19 compute-0 optimistic_elion[200466]: }
Nov 29 07:45:19 compute-0 podman[200390]: 2025-11-29 07:45:19.470182943 +0000 UTC m=+1.483501374 container died ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:45:19 compute-0 systemd[1]: libpod-ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86.scope: Deactivated successfully.
Nov 29 07:45:19 compute-0 systemd[1]: Reloading.
Nov 29 07:45:19 compute-0 sshd-session[200462]: Received disconnect from 114.34.106.146 port 44004:11: Bye Bye [preauth]
Nov 29 07:45:19 compute-0 sshd-session[200462]: Disconnected from invalid user gerrit 114.34.106.146 port 44004 [preauth]
Nov 29 07:45:19 compute-0 systemd-rc-local-generator[200666]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:19 compute-0 systemd-sysv-generator[200669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:19 compute-0 sudo[200623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc2e00ab2f7f2a69da9a9fd042e0ce8da48f839b1b0931d34a959f7f3ad5ef4d-merged.mount: Deactivated successfully.
Nov 29 07:45:20 compute-0 podman[200390]: 2025-11-29 07:45:20.233336193 +0000 UTC m=+2.246654584 container remove ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:20 compute-0 systemd[1]: libpod-conmon-ffe42a2b5946fff5e4985a2811b9553111d86d4a5a65e8bfbb169453130d7c86.scope: Deactivated successfully.
Nov 29 07:45:20 compute-0 sudo[200153]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:20 compute-0 sudo[200804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:20 compute-0 sudo[200804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:20 compute-0 sudo[200853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luicrcacqbfzcgibhwufdjzbljkhskex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402320.0334566-401-33288118354384/AnsiballZ_systemd.py'
Nov 29 07:45:20 compute-0 sudo[200804]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:20 compute-0 sudo[200853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:20 compute-0 sudo[200858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:20 compute-0 sudo[200858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:20 compute-0 sudo[200858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:20 compute-0 sudo[200883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:20 compute-0 sudo[200883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:20 compute-0 sudo[200883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:20 compute-0 sudo[200908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:45:20 compute-0 sudo[200908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:20 compute-0 python3.9[200857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:45:20 compute-0 systemd[1]: Reloading.
Nov 29 07:45:20 compute-0 systemd-sysv-generator[200998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:45:20 compute-0 systemd-rc-local-generator[200994]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:45:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:20 compute-0 podman[201010]: 2025-11-29 07:45:20.960347116 +0000 UTC m=+0.109891365 container create ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:45:20 compute-0 podman[201010]: 2025-11-29 07:45:20.875292577 +0000 UTC m=+0.024836816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:21 compute-0 systemd[1]: Started libpod-conmon-ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a.scope.
Nov 29 07:45:21 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 07:45:21 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 07:45:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:21 compute-0 sudo[200853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:21 compute-0 podman[201010]: 2025-11-29 07:45:21.217063819 +0000 UTC m=+0.366608068 container init ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:45:21 compute-0 podman[201010]: 2025-11-29 07:45:21.230551681 +0000 UTC m=+0.380095900 container start ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:45:21 compute-0 podman[201010]: 2025-11-29 07:45:21.235527661 +0000 UTC m=+0.385071880 container attach ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:45:21 compute-0 magical_dewdney[201029]: 167 167
Nov 29 07:45:21 compute-0 systemd[1]: libpod-ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a.scope: Deactivated successfully.
Nov 29 07:45:21 compute-0 podman[201010]: 2025-11-29 07:45:21.23841317 +0000 UTC m=+0.387957419 container died ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dadd50eaead4362fa615fb883d93ad99968594bb6dee16374079e3483f1c84fa-merged.mount: Deactivated successfully.
Nov 29 07:45:21 compute-0 podman[201010]: 2025-11-29 07:45:21.291785129 +0000 UTC m=+0.441329348 container remove ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:21 compute-0 systemd[1]: libpod-conmon-ca3593671b27df67236b2246abf361724371e22c22143c00bdcf7e87c883ca5a.scope: Deactivated successfully.
Nov 29 07:45:21 compute-0 podman[201149]: 2025-11-29 07:45:21.441468606 +0000 UTC m=+0.023738720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:21 compute-0 sudo[201216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-davdjejwtuxitgnmmdpmomhgqslsnbgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402321.285291-409-218107018450890/AnsiballZ_systemd.py'
Nov 29 07:45:21 compute-0 sudo[201216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:21 compute-0 podman[201149]: 2025-11-29 07:45:21.647360031 +0000 UTC m=+0.229630145 container create 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:21 compute-0 systemd[1]: Started libpod-conmon-8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac.scope.
Nov 29 07:45:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/237cb1abbd063331c4cedb1583f9cc45135560b9043fe1550f15a55e538f7332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/237cb1abbd063331c4cedb1583f9cc45135560b9043fe1550f15a55e538f7332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/237cb1abbd063331c4cedb1583f9cc45135560b9043fe1550f15a55e538f7332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/237cb1abbd063331c4cedb1583f9cc45135560b9043fe1550f15a55e538f7332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:21 compute-0 python3.9[201218]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:22 compute-0 sudo[201216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:22 compute-0 podman[201149]: 2025-11-29 07:45:22.072485909 +0000 UTC m=+0.654756043 container init 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:22 compute-0 podman[201149]: 2025-11-29 07:45:22.082214732 +0000 UTC m=+0.664484826 container start 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:22 compute-0 ceph-mon[75237]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:22 compute-0 podman[201149]: 2025-11-29 07:45:22.285206367 +0000 UTC m=+0.867476571 container attach 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:45:22 compute-0 sudo[201380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzybcweovakkmnoyqebtfojpeugcxih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402322.6046317-409-23486642180364/AnsiballZ_systemd.py'
Nov 29 07:45:22 compute-0 sudo[201380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:23 compute-0 serene_payne[201221]: {
Nov 29 07:45:23 compute-0 serene_payne[201221]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_id": 2,
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "type": "bluestore"
Nov 29 07:45:23 compute-0 serene_payne[201221]:     },
Nov 29 07:45:23 compute-0 serene_payne[201221]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_id": 0,
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "type": "bluestore"
Nov 29 07:45:23 compute-0 serene_payne[201221]:     },
Nov 29 07:45:23 compute-0 python3.9[201384]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:23 compute-0 serene_payne[201221]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_id": 1,
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:45:23 compute-0 serene_payne[201221]:         "type": "bluestore"
Nov 29 07:45:23 compute-0 serene_payne[201221]:     }
Nov 29 07:45:23 compute-0 serene_payne[201221]: }
Nov 29 07:45:23 compute-0 systemd[1]: libpod-8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac.scope: Deactivated successfully.
Nov 29 07:45:23 compute-0 systemd[1]: libpod-8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac.scope: Consumed 1.173s CPU time.
Nov 29 07:45:23 compute-0 podman[201149]: 2025-11-29 07:45:23.265015653 +0000 UTC m=+1.847285767 container died 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:23 compute-0 sudo[201380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.505436) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402323505551, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, "total_data_size": 3502794, "memory_usage": 3553824, "flush_reason": "Manual Compaction"}
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402323693860, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3438187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10131, "largest_seqno": 12176, "table_properties": {"data_size": 3428858, "index_size": 5951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17929, "raw_average_key_size": 19, "raw_value_size": 3410386, "raw_average_value_size": 3706, "num_data_blocks": 269, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402086, "oldest_key_time": 1764402086, "file_creation_time": 1764402323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 188516 microseconds, and 9664 cpu microseconds.
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:23 compute-0 ceph-mon[75237]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.693955) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3438187 bytes OK
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.693982) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.696004) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.696061) EVENT_LOG_v1 {"time_micros": 1764402323696045, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.696137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3494245, prev total WAL file size 3495400, number of live WAL files 2.
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.698267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3357KB)], [26(7285KB)]
Nov 29 07:45:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402323698537, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10898082, "oldest_snapshot_seqno": -1}
Nov 29 07:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-237cb1abbd063331c4cedb1583f9cc45135560b9043fe1550f15a55e538f7332-merged.mount: Deactivated successfully.
Nov 29 07:45:23 compute-0 sudo[201574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewfqinwwhrhdbujzzbjmyuzeryesmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402323.4742832-409-99642275510973/AnsiballZ_systemd.py'
Nov 29 07:45:23 compute-0 sudo[201574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:24 compute-0 python3.9[201576]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:24 compute-0 sudo[201574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4005 keys, 8882678 bytes, temperature: kUnknown
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402324338533, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8882678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8851083, "index_size": 20477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97179, "raw_average_key_size": 24, "raw_value_size": 8774026, "raw_average_value_size": 2190, "num_data_blocks": 879, "num_entries": 4005, "num_filter_entries": 4005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:24 compute-0 podman[201149]: 2025-11-29 07:45:24.340954355 +0000 UTC m=+2.923224449 container remove 8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.339047) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8882678 bytes
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.342273) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 17.0 rd, 13.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 4519, records dropped: 514 output_compression: NoCompression
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.342317) EVENT_LOG_v1 {"time_micros": 1764402324342296, "job": 10, "event": "compaction_finished", "compaction_time_micros": 640208, "compaction_time_cpu_micros": 33905, "output_level": 6, "num_output_files": 1, "total_output_size": 8882678, "num_input_records": 4519, "num_output_records": 4005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402324343537, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402324346708, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:23.698007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.346759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.346764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.346765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.346767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:45:24.346768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:24 compute-0 systemd[1]: libpod-conmon-8d4028f4a37f5b4e177bd84245627b60802d0c4a5f2390040289d44cd380edac.scope: Deactivated successfully.
Nov 29 07:45:24 compute-0 sudo[200908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:45:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:45:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:24 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 11a9f2fe-e0ae-48ed-80ad-fe1f9d20526c does not exist
Nov 29 07:45:24 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 09b9d051-538d-4c27-a016-c8c2d617939b does not exist
Nov 29 07:45:24 compute-0 sudo[201604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:24 compute-0 sudo[201604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:24 compute-0 sudo[201604]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 sudo[201656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:45:24 compute-0 sudo[201656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:24 compute-0 sudo[201656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 sudo[201779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdfyznsluesxrmltoedmajcwilybdfkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402324.4685414-409-51308785129000/AnsiballZ_systemd.py'
Nov 29 07:45:24 compute-0 sudo[201779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:25 compute-0 python3.9[201781]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:45:25 compute-0 sudo[201779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:25 compute-0 sudo[201934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uksgguejkkfihhttpbwqubyjstjdggcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402325.6415179-409-41521528790342/AnsiballZ_systemd.py'
Nov 29 07:45:25 compute-0 sudo[201934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:26 compute-0 python3.9[201936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:26 compute-0 sudo[201934]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:26 compute-0 ceph-mon[75237]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:26 compute-0 sudo[202089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpbttvyxngahacplyhfjsvdoohaxgddx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402326.5605295-409-233787075353463/AnsiballZ_systemd.py'
Nov 29 07:45:26 compute-0 sudo[202089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:45:27.099 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:45:27.101 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:45:27.101 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:27 compute-0 python3.9[202091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:27 compute-0 sudo[202089]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:27 compute-0 ceph-mon[75237]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:27 compute-0 sudo[202244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seebmkdhvbwcjpkavuqsdmlahbtdymmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402327.484448-409-103521762086318/AnsiballZ_systemd.py'
Nov 29 07:45:27 compute-0 sudo[202244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:28 compute-0 python3.9[202246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:28 compute-0 sudo[202244]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:28 compute-0 sudo[202411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfznabothacfdbiokhskkabrzfoiebn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402328.380008-409-195329389241138/AnsiballZ_systemd.py'
Nov 29 07:45:28 compute-0 sudo[202411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:28 compute-0 podman[202373]: 2025-11-29 07:45:28.781914495 +0000 UTC m=+0.098314227 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 07:45:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:29 compute-0 python3.9[202414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:29 compute-0 sudo[202411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:29 compute-0 sudo[202575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vscegwpalguiceyyonkxobkljjmoigmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402329.3231664-409-248745400819812/AnsiballZ_systemd.py'
Nov 29 07:45:29 compute-0 sudo[202575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:29 compute-0 python3.9[202577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:30 compute-0 sshd-session[202448]: Received disconnect from 103.236.140.19 port 47476:11: Bye Bye [preauth]
Nov 29 07:45:30 compute-0 sshd-session[202448]: Disconnected from authenticating user root 103.236.140.19 port 47476 [preauth]
Nov 29 07:45:31 compute-0 sudo[202575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:31 compute-0 sudo[202730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmzakakutgibfbiaclwbwoalymekobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402331.304118-409-10878326572565/AnsiballZ_systemd.py'
Nov 29 07:45:31 compute-0 sudo[202730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:31 compute-0 ceph-mon[75237]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:32 compute-0 python3.9[202732]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:32 compute-0 sudo[202730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:32 compute-0 sudo[202885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqbbaorxiiuspdunhrgzydtarluwjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402332.277933-409-128776180730323/AnsiballZ_systemd.py'
Nov 29 07:45:32 compute-0 sudo[202885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:33 compute-0 python3.9[202887]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:33 compute-0 sudo[202885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 sudo[203040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvhbrwxrqwjwtwsllnjogkntirhffon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402333.839892-409-181337550605895/AnsiballZ_systemd.py'
Nov 29 07:45:34 compute-0 sudo[203040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:34 compute-0 ceph-mon[75237]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:34 compute-0 ceph-mon[75237]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:34 compute-0 python3.9[203042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:34 compute-0 sudo[203040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:35 compute-0 sudo[203195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spaowpaknjathlachmkhdtkurlbtykid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402335.0718057-409-227299779691771/AnsiballZ_systemd.py'
Nov 29 07:45:35 compute-0 sudo[203195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:35 compute-0 ceph-mon[75237]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:35 compute-0 python3.9[203197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:35 compute-0 sudo[203195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:36 compute-0 sudo[203363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twitfyodzhecmlwgfryrnievgpmpcfmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402335.9792173-409-134304049325969/AnsiballZ_systemd.py'
Nov 29 07:45:36 compute-0 sudo[203363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:36 compute-0 podman[203324]: 2025-11-29 07:45:36.325485421 +0000 UTC m=+0.103812028 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 07:45:36 compute-0 python3.9[203371]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:45:36 compute-0 sudo[203363]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:37 compute-0 sudo[203531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycltbmivyawdakbtkvpzebxlfjdpulul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402337.4815896-511-122538802501099/AnsiballZ_file.py'
Nov 29 07:45:37 compute-0 sudo[203531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:37 compute-0 python3.9[203533]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:38 compute-0 sudo[203531]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:38 compute-0 sudo[203683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vomafhbxagflypvyeafalyajdzpujhno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402338.1744914-511-165463054404277/AnsiballZ_file.py'
Nov 29 07:45:38 compute-0 sudo[203683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:38 compute-0 python3.9[203685]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:38 compute-0 sudo[203683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:45:38
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'images', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:45:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:39 compute-0 sudo[203835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svpybjycbbxxipiengwtnpqglmorophp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402338.9156365-511-176574358677144/AnsiballZ_file.py'
Nov 29 07:45:39 compute-0 sudo[203835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:39 compute-0 ceph-mon[75237]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:39 compute-0 python3.9[203837]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:39 compute-0 sudo[203835]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:40 compute-0 sudo[203987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdkgnqulhkukjldnvijavxortxubqan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402339.8090706-511-211878730838627/AnsiballZ_file.py'
Nov 29 07:45:40 compute-0 sudo[203987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:40 compute-0 python3.9[203989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:40 compute-0 sudo[203987]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:40 compute-0 ceph-mon[75237]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:40 compute-0 sudo[204139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibcyjecavylhhpbytumhskcntcrdvjbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402340.5212016-511-64158464760835/AnsiballZ_file.py'
Nov 29 07:45:40 compute-0 sudo[204139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:40 compute-0 python3.9[204141]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:41 compute-0 sudo[204139]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:41 compute-0 sudo[204291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juaqunmclsdncgfisxdzkwkgfpjsdlgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402341.156197-511-26388741388508/AnsiballZ_file.py'
Nov 29 07:45:41 compute-0 sudo[204291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:45:42 compute-0 python3.9[204293]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:45:42 compute-0 sudo[204291]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:43 compute-0 sudo[204443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzslqsbsyxavonuzseiyrgqqkcvrbrst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402342.9913757-554-80007494603894/AnsiballZ_stat.py'
Nov 29 07:45:43 compute-0 sudo[204443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:43 compute-0 ceph-mon[75237]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:43 compute-0 python3.9[204445]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:43 compute-0 sudo[204443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:44 compute-0 sudo[204568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgumhfosrxtonqosbqgsatpamzobqgnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402342.9913757-554-80007494603894/AnsiballZ_copy.py'
Nov 29 07:45:44 compute-0 sudo[204568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:44 compute-0 python3.9[204570]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402342.9913757-554-80007494603894/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:44 compute-0 sudo[204568]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:44 compute-0 ceph-mon[75237]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:44 compute-0 sudo[204720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgjgvmotibcqyivndixwkfvqdsezxwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402344.5947433-554-38702316909413/AnsiballZ_stat.py'
Nov 29 07:45:44 compute-0 sudo[204720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:45 compute-0 python3.9[204722]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:45 compute-0 sudo[204720]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:45 compute-0 ceph-mon[75237]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:46 compute-0 sudo[204845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyagthehsaqqcwagcvubsgyotjybmjbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402344.5947433-554-38702316909413/AnsiballZ_copy.py'
Nov 29 07:45:46 compute-0 sudo[204845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:46 compute-0 python3.9[204847]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402344.5947433-554-38702316909413/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:46 compute-0 sudo[204845]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:46 compute-0 sudo[204997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxljccgstxpyenmmfouvoylycrjasir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402346.5009952-554-30008899900670/AnsiballZ_stat.py'
Nov 29 07:45:46 compute-0 sudo[204997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:46 compute-0 python3.9[204999]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:47 compute-0 sudo[204997]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:47 compute-0 sudo[205122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvqfdiqqhqwnqggxpuozhbvawtgwdqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402346.5009952-554-30008899900670/AnsiballZ_copy.py'
Nov 29 07:45:47 compute-0 sudo[205122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:48 compute-0 ceph-mon[75237]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:48 compute-0 python3.9[205124]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402346.5009952-554-30008899900670/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:48 compute-0 sudo[205122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:48 compute-0 sudo[205274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcclahznvwagzcljuyblyicerovzbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402348.3444333-554-269383327064284/AnsiballZ_stat.py'
Nov 29 07:45:48 compute-0 sudo[205274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:48 compute-0 python3.9[205276]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:49 compute-0 sudo[205274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:49 compute-0 sudo[205399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzarookapzvxrjznidvfxgcmfymarhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402348.3444333-554-269383327064284/AnsiballZ_copy.py'
Nov 29 07:45:49 compute-0 sudo[205399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:49 compute-0 python3.9[205401]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402348.3444333-554-269383327064284/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:49 compute-0 sudo[205399]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:50 compute-0 sudo[205551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtebaqwvkjcxkjhbudkubbqzhqygqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402350.1047904-554-118009102514537/AnsiballZ_stat.py'
Nov 29 07:45:50 compute-0 sudo[205551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:50 compute-0 python3.9[205553]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:50 compute-0 sudo[205551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:50 compute-0 ceph-mon[75237]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:51 compute-0 sudo[205676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spscdxkkuctbdsneimjyhwkqtqmbrlwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402350.1047904-554-118009102514537/AnsiballZ_copy.py'
Nov 29 07:45:51 compute-0 sudo[205676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:51 compute-0 python3.9[205678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402350.1047904-554-118009102514537/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:51 compute-0 sudo[205676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:51 compute-0 sudo[205828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrhakixoqsjjxiqdcwbiupqsbavkxlfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402351.3920352-554-72031555526068/AnsiballZ_stat.py'
Nov 29 07:45:51 compute-0 sudo[205828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:51 compute-0 python3.9[205830]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:51 compute-0 sudo[205828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:52 compute-0 ceph-mon[75237]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:52 compute-0 sudo[205953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frwxxnjxdwheuxfufmjfaqcjgfbsmfbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402351.3920352-554-72031555526068/AnsiballZ_copy.py'
Nov 29 07:45:52 compute-0 sudo[205953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:52 compute-0 python3.9[205955]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402351.3920352-554-72031555526068/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:52 compute-0 sudo[205953]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:52 compute-0 sudo[206105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idbiyepdrhzqvqdqgbjqkbozogiegxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402352.6381726-554-187594321536326/AnsiballZ_stat.py'
Nov 29 07:45:52 compute-0 sudo[206105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:53 compute-0 python3.9[206107]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:53 compute-0 sudo[206105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:53 compute-0 sudo[206228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efgarovqwwoqsdrkzqbzocxfdabnubxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402352.6381726-554-187594321536326/AnsiballZ_copy.py'
Nov 29 07:45:53 compute-0 sudo[206228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:53 compute-0 python3.9[206230]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402352.6381726-554-187594321536326/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:53 compute-0 sudo[206228]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:54 compute-0 ceph-mon[75237]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:54 compute-0 sudo[206380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqnwcepkoxemwwdkfayyfmhytzakxqru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402354.0168948-554-144570662241299/AnsiballZ_stat.py'
Nov 29 07:45:54 compute-0 sudo[206380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:54 compute-0 python3.9[206382]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:45:54 compute-0 sudo[206380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:54 compute-0 sudo[206505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shtiqzrcqpbsauaudmomxabnmxfhsjtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402354.0168948-554-144570662241299/AnsiballZ_copy.py'
Nov 29 07:45:54 compute-0 sudo[206505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:55 compute-0 python3.9[206507]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764402354.0168948-554-144570662241299/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:55 compute-0 sudo[206505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:45:56 compute-0 sudo[206657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpoquxrzlmdqdaqggyxkyrnatuwopkft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402355.3442302-667-225468230205880/AnsiballZ_command.py'
Nov 29 07:45:56 compute-0 sudo[206657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:56 compute-0 python3.9[206659]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 07:45:56 compute-0 ceph-mon[75237]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:56 compute-0 sudo[206657]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:57 compute-0 sudo[206810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqenjnpdfxaromtybxjwpywiatzoaeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402356.9172864-676-224094921707651/AnsiballZ_file.py'
Nov 29 07:45:57 compute-0 sudo[206810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:57 compute-0 python3.9[206812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:57 compute-0 sudo[206810]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:58 compute-0 sudo[206962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxytxulqibtgwmvcnbwqacizqbxzwqsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402357.9124265-676-214491978512759/AnsiballZ_file.py'
Nov 29 07:45:58 compute-0 sudo[206962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:58 compute-0 python3.9[206964]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:58 compute-0 sudo[206962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:58 compute-0 podman[207043]: 2025-11-29 07:45:58.935944457 +0000 UTC m=+0.096527195 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 07:45:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:59 compute-0 sudo[207133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxbirmiwaovfdykscmmxfkpcswuwkfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402358.6726272-676-96575821749741/AnsiballZ_file.py'
Nov 29 07:45:59 compute-0 sudo[207133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:59 compute-0 ceph-mon[75237]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:59 compute-0 python3.9[207135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:59 compute-0 sudo[207133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:59 compute-0 sudo[207285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohafoigjxditsrwnvsqtlzaggwdhizbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402359.4037223-676-189771055299957/AnsiballZ_file.py'
Nov 29 07:45:59 compute-0 sudo[207285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:45:59 compute-0 python3.9[207287]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:45:59 compute-0 sudo[207285]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:00 compute-0 ceph-mon[75237]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:00 compute-0 sudo[207437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmlquydgwdbginojwofrmqdmixjyvvbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402360.1058474-676-44567596138897/AnsiballZ_file.py'
Nov 29 07:46:00 compute-0 sudo[207437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:00 compute-0 python3.9[207439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:00 compute-0 sudo[207437]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:01 compute-0 sudo[207589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldwmbqlentxnagkcsrtlzgmqbajvmqyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402360.8498237-676-272137459735910/AnsiballZ_file.py'
Nov 29 07:46:01 compute-0 sudo[207589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:01 compute-0 python3.9[207591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:01 compute-0 sudo[207589]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:01 compute-0 sudo[207741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knxvwqgfqunsofvqbivgecfchyhxcgyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402361.4983616-676-62246898677061/AnsiballZ_file.py'
Nov 29 07:46:01 compute-0 sudo[207741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:02 compute-0 python3.9[207743]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:02 compute-0 sudo[207741]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:02 compute-0 ceph-mon[75237]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:02 compute-0 sudo[207893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyahsufektimmpexfflyjinynawuvqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402362.2319918-676-20104925023450/AnsiballZ_file.py'
Nov 29 07:46:02 compute-0 sudo[207893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:02 compute-0 python3.9[207895]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:02 compute-0 sudo[207893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:03 compute-0 sudo[208045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwfxsobyctpwuicttrqxkqgywdvpmenl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402362.9419124-676-240928027017796/AnsiballZ_file.py'
Nov 29 07:46:03 compute-0 sudo[208045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:03 compute-0 python3.9[208047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:03 compute-0 sudo[208045]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:03 compute-0 sudo[208197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmiuqwxtraoqcivhvfyuztrcmyaudpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402363.6402087-676-70152366326834/AnsiballZ_file.py'
Nov 29 07:46:03 compute-0 sudo[208197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:04 compute-0 python3.9[208199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:04 compute-0 sudo[208197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:04 compute-0 sudo[208349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvzfmiugiuhrfxrsmhjfenbqpdnomtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402364.3078978-676-45012666836523/AnsiballZ_file.py'
Nov 29 07:46:04 compute-0 sudo[208349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:04 compute-0 python3.9[208351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:04 compute-0 sudo[208349]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:04 compute-0 ceph-mon[75237]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:05 compute-0 sudo[208501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtzrsrkyznitqroanzinwcylzfykljf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402364.944193-676-178312727298448/AnsiballZ_file.py'
Nov 29 07:46:05 compute-0 sudo[208501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:05 compute-0 python3.9[208503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:05 compute-0 sudo[208501]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:05 compute-0 sudo[208653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crsirdalxekjmgeiokwtktxekicgfymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402365.602228-676-220791122326393/AnsiballZ_file.py'
Nov 29 07:46:05 compute-0 sudo[208653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:06 compute-0 ceph-mon[75237]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:06 compute-0 python3.9[208655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:06 compute-0 sudo[208653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:06 compute-0 podman[208656]: 2025-11-29 07:46:06.939307491 +0000 UTC m=+0.103085979 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 07:46:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:07 compute-0 sudo[208830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupdbppvewiqvasdgpamrmbgygmvjqjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402367.0563357-676-164182998415799/AnsiballZ_file.py'
Nov 29 07:46:07 compute-0 sudo[208830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:07 compute-0 python3.9[208832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:07 compute-0 sudo[208830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:07 compute-0 ceph-mon[75237]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:08 compute-0 sudo[208982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htvcgxjgyayxowrpafkpkipaupzmufkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402367.8009882-775-267220122367773/AnsiballZ_stat.py'
Nov 29 07:46:08 compute-0 sudo[208982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:08 compute-0 python3.9[208984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:08 compute-0 sudo[208982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:09 compute-0 sudo[209105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svlpugdevoiaomkwejakwahbivmvfazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402367.8009882-775-267220122367773/AnsiballZ_copy.py'
Nov 29 07:46:09 compute-0 sudo[209105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:09 compute-0 python3.9[209107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402367.8009882-775-267220122367773/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:09 compute-0 sudo[209105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:09 compute-0 sudo[209257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyzliqsapzefpcbljkctcytfbkaqeefc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402369.6018026-775-43373529964192/AnsiballZ_stat.py'
Nov 29 07:46:09 compute-0 sudo[209257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:10 compute-0 python3.9[209259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:10 compute-0 sudo[209257]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:10 compute-0 ceph-mon[75237]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:10 compute-0 sudo[209380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxzjvuwahdowpropnlsoldmluocdhueb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402369.6018026-775-43373529964192/AnsiballZ_copy.py'
Nov 29 07:46:10 compute-0 sudo[209380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:10 compute-0 python3.9[209382]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402369.6018026-775-43373529964192/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:10 compute-0 sudo[209380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:11 compute-0 sudo[209532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wumcnzpujrirvrcldopymvjqphjavxie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402370.9760244-775-77504160844136/AnsiballZ_stat.py'
Nov 29 07:46:11 compute-0 sudo[209532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:11 compute-0 python3.9[209534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:11 compute-0 sudo[209532]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:11 compute-0 sudo[209655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehpsllxvmtpwvomawnaedelcyeepxjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402370.9760244-775-77504160844136/AnsiballZ_copy.py'
Nov 29 07:46:11 compute-0 sudo[209655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:12 compute-0 python3.9[209657]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402370.9760244-775-77504160844136/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:12 compute-0 sudo[209655]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:12 compute-0 ceph-mon[75237]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:12 compute-0 sudo[209807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtnwcojspdbktpyamxlttvghkdknmuju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402372.1662383-775-77811706683545/AnsiballZ_stat.py'
Nov 29 07:46:12 compute-0 sudo[209807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:12 compute-0 python3.9[209809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:12 compute-0 sudo[209807]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:12 compute-0 sudo[209930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lstvfobyetgwwuejxgciyhiovxntbmzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402372.1662383-775-77811706683545/AnsiballZ_copy.py'
Nov 29 07:46:12 compute-0 sudo[209930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:13 compute-0 python3.9[209932]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402372.1662383-775-77811706683545/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:13 compute-0 sudo[209930]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:13 compute-0 sudo[210082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpyysnpldvangqaecdgebegaxbjxvnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402373.3767824-775-201995938377577/AnsiballZ_stat.py'
Nov 29 07:46:13 compute-0 sudo[210082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:13 compute-0 python3.9[210084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:13 compute-0 sudo[210082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:14 compute-0 sudo[210205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilhyvpcmooihurbtyzyjmecgdcqbzndm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402373.3767824-775-201995938377577/AnsiballZ_copy.py'
Nov 29 07:46:14 compute-0 sudo[210205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:14 compute-0 ceph-mon[75237]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:14 compute-0 python3.9[210207]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402373.3767824-775-201995938377577/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:14 compute-0 sudo[210205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:15 compute-0 sudo[210357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkhivnaqseghhfwljwvqugpyryzwlas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402374.7188118-775-97945563508954/AnsiballZ_stat.py'
Nov 29 07:46:15 compute-0 sudo[210357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:15 compute-0 python3.9[210359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:15 compute-0 sudo[210357]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:15 compute-0 ceph-mon[75237]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:15 compute-0 sudo[210480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkabuehqrfuwwhzhcwjacyjicapkhais ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402374.7188118-775-97945563508954/AnsiballZ_copy.py'
Nov 29 07:46:15 compute-0 sudo[210480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:15 compute-0 python3.9[210482]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402374.7188118-775-97945563508954/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:15 compute-0 sudo[210480]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:16 compute-0 sudo[210632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhttktemzylajhczojdhjsimvizklahq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402376.0698197-775-248357395067894/AnsiballZ_stat.py'
Nov 29 07:46:16 compute-0 sudo[210632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:16 compute-0 python3.9[210634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:16 compute-0 sudo[210632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:16 compute-0 sudo[210755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbdduzbdqvghtghzhwohkaysharegsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402376.0698197-775-248357395067894/AnsiballZ_copy.py'
Nov 29 07:46:16 compute-0 sudo[210755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:17 compute-0 python3.9[210757]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402376.0698197-775-248357395067894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:17 compute-0 sudo[210755]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:18 compute-0 ceph-mon[75237]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:18 compute-0 sudo[210907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznojoyohqbzpmulcfbdxxeefofrijtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402378.107398-775-140002215976181/AnsiballZ_stat.py'
Nov 29 07:46:18 compute-0 sudo[210907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:18 compute-0 python3.9[210909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:18 compute-0 sudo[210907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:18 compute-0 sudo[211030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrlrndtshpqimbprazjgmjiyfnzbqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402378.107398-775-140002215976181/AnsiballZ_copy.py'
Nov 29 07:46:18 compute-0 sudo[211030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:19 compute-0 python3.9[211032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402378.107398-775-140002215976181/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:19 compute-0 sudo[211030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:19 compute-0 sudo[211182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hacbcvkgzwvgyfozsjktupgzklajilda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402379.5827365-775-158562332869187/AnsiballZ_stat.py'
Nov 29 07:46:19 compute-0 sudo[211182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:20 compute-0 python3.9[211184]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:20 compute-0 sudo[211182]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:20 compute-0 ceph-mon[75237]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:20 compute-0 sudo[211305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdxjrupayfopecnngyrhrlwwlownxvgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402379.5827365-775-158562332869187/AnsiballZ_copy.py'
Nov 29 07:46:20 compute-0 sudo[211305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:20 compute-0 python3.9[211307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402379.5827365-775-158562332869187/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:20 compute-0 sudo[211305]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:21 compute-0 sudo[211457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpdjjakpbfrzxdfwfyuyedrmzrgdyad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402380.7152834-775-196465899114526/AnsiballZ_stat.py'
Nov 29 07:46:21 compute-0 sudo[211457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:21 compute-0 python3.9[211459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:21 compute-0 sudo[211457]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:21 compute-0 sudo[211580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdlhhqwabxxhlatpoixnsazybitzeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402380.7152834-775-196465899114526/AnsiballZ_copy.py'
Nov 29 07:46:21 compute-0 sudo[211580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:21 compute-0 python3.9[211582]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402380.7152834-775-196465899114526/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:21 compute-0 sudo[211580]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:22 compute-0 sudo[211732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vreinlunysclkongyirftixcemnmpotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402381.9769561-775-248421150899908/AnsiballZ_stat.py'
Nov 29 07:46:22 compute-0 sudo[211732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:22 compute-0 ceph-mon[75237]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:22 compute-0 python3.9[211734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:22 compute-0 sudo[211732]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:22 compute-0 sudo[211855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozaapgetejxnsnbvioyxqgaxaxojraph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402381.9769561-775-248421150899908/AnsiballZ_copy.py'
Nov 29 07:46:22 compute-0 sudo[211855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:23 compute-0 python3.9[211857]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402381.9769561-775-248421150899908/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:23 compute-0 sudo[211855]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:23 compute-0 sudo[212007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthcsyxurhjzsvfeoyiisoopawoqpfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402383.2318127-775-13228160030101/AnsiballZ_stat.py'
Nov 29 07:46:23 compute-0 sudo[212007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:23 compute-0 python3.9[212009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:23 compute-0 sudo[212007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:24 compute-0 sudo[212130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingarzjhzqmurphdvukyhrhudpslbhpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402383.2318127-775-13228160030101/AnsiballZ_copy.py'
Nov 29 07:46:24 compute-0 sudo[212130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:24 compute-0 python3.9[212132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402383.2318127-775-13228160030101/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:24 compute-0 ceph-mon[75237]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:24 compute-0 sudo[212130]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:24 compute-0 sudo[212188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:24 compute-0 sudo[212188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:24 compute-0 sudo[212188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:24 compute-0 sudo[212236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:24 compute-0 sudo[212236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:24 compute-0 sudo[212236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:24 compute-0 sudo[212284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:24 compute-0 sudo[212284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:24 compute-0 sudo[212284]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:24 compute-0 sudo[212328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:46:24 compute-0 sudo[212328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:24 compute-0 sudo[212384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdzbtoukoauajpqpaynihjuebhbhdeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402384.553363-775-149664707161235/AnsiballZ_stat.py'
Nov 29 07:46:24 compute-0 sudo[212384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:25 compute-0 python3.9[212386]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:25 compute-0 sudo[212384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 ceph-mon[75237]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:25 compute-0 sudo[212328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:25 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7b16cc73-29fe-4dc6-b07e-851c59206b10 does not exist
Nov 29 07:46:25 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4781b6fe-a649-4ec4-a946-7ff7414a6aa7 does not exist
Nov 29 07:46:25 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1162efeb-fa26-4c49-a790-02ca5cc76cd0 does not exist
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:46:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:46:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:25 compute-0 sudo[212562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpktvubefqkhbltzhspjllcelgomijuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402384.553363-775-149664707161235/AnsiballZ_copy.py'
Nov 29 07:46:25 compute-0 sudo[212562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:25 compute-0 sudo[212517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:25 compute-0 sudo[212517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:25 compute-0 sudo[212517]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 sshd-session[212133]: Received disconnect from 103.234.151.178 port 13776:11: Bye Bye [preauth]
Nov 29 07:46:25 compute-0 sshd-session[212133]: Disconnected from authenticating user root 103.234.151.178 port 13776 [preauth]
Nov 29 07:46:25 compute-0 sudo[212567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:25 compute-0 sudo[212567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:25 compute-0 sudo[212567]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 python3.9[212565]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402384.553363-775-149664707161235/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:25 compute-0 sudo[212592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:25 compute-0 sudo[212592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:25 compute-0 sudo[212592]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 sudo[212562]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:25 compute-0 sudo[212617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:46:25 compute-0 sudo[212617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.299958661 +0000 UTC m=+0.045425106 container create f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:46:26 compute-0 systemd[1]: Started libpod-conmon-f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47.scope.
Nov 29 07:46:26 compute-0 sudo[212846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcigdexsvhxxgtmkbccijstzzzzxespz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402386.0634618-775-120736157059451/AnsiballZ_stat.py'
Nov 29 07:46:26 compute-0 sudo[212846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.280949788 +0000 UTC m=+0.026416243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.401556548 +0000 UTC m=+0.147023003 container init f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.411053935 +0000 UTC m=+0.156520380 container start f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.415124144 +0000 UTC m=+0.160590609 container attach f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:46:26 compute-0 hungry_antonelli[212847]: 167 167
Nov 29 07:46:26 compute-0 systemd[1]: libpod-f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47.scope: Deactivated successfully.
Nov 29 07:46:26 compute-0 conmon[212847]: conmon f7bd0db55bcc5888c66a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47.scope/container/memory.events
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.418942377 +0000 UTC m=+0.164408832 container died f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0823eb0aa17f59d49074067d89ed434a43ecc21ea0245d1949b5d16cb0943314-merged.mount: Deactivated successfully.
Nov 29 07:46:26 compute-0 podman[212780]: 2025-11-29 07:46:26.479201032 +0000 UTC m=+0.224667477 container remove f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:46:26 compute-0 systemd[1]: libpod-conmon-f7bd0db55bcc5888c66a95bac14174a578cf10f4a6800aa48678085fef66cf47.scope: Deactivated successfully.
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:46:26 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:26 compute-0 python3.9[212851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:26 compute-0 sudo[212846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:26 compute-0 podman[212877]: 2025-11-29 07:46:26.658211665 +0000 UTC m=+0.053453941 container create 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:46:26 compute-0 systemd[1]: Started libpod-conmon-5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b.scope.
Nov 29 07:46:26 compute-0 podman[212877]: 2025-11-29 07:46:26.631499256 +0000 UTC m=+0.026741552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:26 compute-0 podman[212877]: 2025-11-29 07:46:26.766599937 +0000 UTC m=+0.161842213 container init 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:46:26 compute-0 podman[212877]: 2025-11-29 07:46:26.775115007 +0000 UTC m=+0.170357273 container start 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:46:26 compute-0 podman[212877]: 2025-11-29 07:46:26.778455867 +0000 UTC m=+0.173698133 container attach 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:46:26 compute-0 sudo[213013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgpvyrgauliiewwgbkeamsspgshtmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402386.0634618-775-120736157059451/AnsiballZ_copy.py'
Nov 29 07:46:26 compute-0 sudo[213013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:46:27.101 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:46:27.103 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:46:27.103 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:27 compute-0 python3.9[213015]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402386.0634618-775-120736157059451/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:27 compute-0 sudo[213013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:27 compute-0 ceph-mon[75237]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:27 compute-0 python3.9[213174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:46:27 compute-0 strange_beaver[212935]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:46:27 compute-0 strange_beaver[212935]: --> relative data size: 1.0
Nov 29 07:46:27 compute-0 strange_beaver[212935]: --> All data devices are unavailable
Nov 29 07:46:27 compute-0 systemd[1]: libpod-5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b.scope: Deactivated successfully.
Nov 29 07:46:27 compute-0 podman[212877]: 2025-11-29 07:46:27.906940022 +0000 UTC m=+1.302182288 container died 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:46:27 compute-0 systemd[1]: libpod-5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b.scope: Consumed 1.074s CPU time.
Nov 29 07:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1455772c772cf58ed5a4b5f8b2d43e28ea2b07630d8937ec0e44e5cc5651bd82-merged.mount: Deactivated successfully.
Nov 29 07:46:27 compute-0 podman[212877]: 2025-11-29 07:46:27.969540049 +0000 UTC m=+1.364782315 container remove 5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:46:27 compute-0 systemd[1]: libpod-conmon-5587abb2257d3d651513c7ac918a481a56334720a260f508a9900a5a28aac83b.scope: Deactivated successfully.
Nov 29 07:46:28 compute-0 sudo[212617]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:28 compute-0 sudo[213237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:28 compute-0 sudo[213237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:28 compute-0 sudo[213237]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:28 compute-0 sudo[213285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:28 compute-0 sudo[213285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:28 compute-0 sudo[213285]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:28 compute-0 sudo[213333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:28 compute-0 sudo[213333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:28 compute-0 sudo[213333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:28 compute-0 sudo[213358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:46:28 compute-0 sudo[213358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:28 compute-0 sudo[213482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmwckxwnlakqasryfnouuxzjwwauwtts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402388.043777-981-197188782293638/AnsiballZ_seboolean.py'
Nov 29 07:46:28 compute-0 sudo[213482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.64669829 +0000 UTC m=+0.064253793 container create d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:46:28 compute-0 systemd[1]: Started libpod-conmon-d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d.scope.
Nov 29 07:46:28 compute-0 python3.9[213486]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.614281317 +0000 UTC m=+0.031836820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.816778314 +0000 UTC m=+0.234333797 container init d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.825489758 +0000 UTC m=+0.243045261 container start d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:46:28 compute-0 loving_cerf[213515]: 167 167
Nov 29 07:46:28 compute-0 systemd[1]: libpod-d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d.scope: Deactivated successfully.
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.86227122 +0000 UTC m=+0.279826693 container attach d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.862908057 +0000 UTC m=+0.280463540 container died d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cd152c492f80c38a40cc0006e0e85f014ffcb124391a8cc3f35b421a88771a2-merged.mount: Deactivated successfully.
Nov 29 07:46:28 compute-0 podman[213498]: 2025-11-29 07:46:28.913342046 +0000 UTC m=+0.330897509 container remove d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:46:28 compute-0 systemd[1]: libpod-conmon-d9358d533d96d9c4002ce46826c0633cd7088f8d2cb3714dbfef23f566e4d05d.scope: Deactivated successfully.
Nov 29 07:46:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:29 compute-0 podman[213538]: 2025-11-29 07:46:29.131448605 +0000 UTC m=+0.059244627 container create f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 07:46:29 compute-0 systemd[1]: Started libpod-conmon-f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703.scope.
Nov 29 07:46:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2ba048f6a85d56024b5c77483d1a54f2f16c569e3edafb0c7ff4dba43d2e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2ba048f6a85d56024b5c77483d1a54f2f16c569e3edafb0c7ff4dba43d2e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2ba048f6a85d56024b5c77483d1a54f2f16c569e3edafb0c7ff4dba43d2e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2ba048f6a85d56024b5c77483d1a54f2f16c569e3edafb0c7ff4dba43d2e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:29 compute-0 podman[213538]: 2025-11-29 07:46:29.096655817 +0000 UTC m=+0.024451939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:29 compute-0 podman[213538]: 2025-11-29 07:46:29.201225086 +0000 UTC m=+0.129021128 container init f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:46:29 compute-0 podman[213538]: 2025-11-29 07:46:29.210286 +0000 UTC m=+0.138082032 container start f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:46:29 compute-0 podman[213538]: 2025-11-29 07:46:29.215773908 +0000 UTC m=+0.143569930 container attach f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:46:29 compute-0 podman[213552]: 2025-11-29 07:46:29.237714389 +0000 UTC m=+0.068319252 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:46:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:29 compute-0 sudo[213482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 ceph-mon[75237]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]: {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     "0": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "devices": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "/dev/loop3"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             ],
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_name": "ceph_lv0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_size": "21470642176",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "name": "ceph_lv0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "tags": {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_name": "ceph",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.crush_device_class": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.encrypted": "0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_id": "0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.vdo": "0"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             },
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "vg_name": "ceph_vg0"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         }
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     ],
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     "1": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "devices": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "/dev/loop4"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             ],
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_name": "ceph_lv1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_size": "21470642176",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "name": "ceph_lv1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "tags": {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_name": "ceph",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.crush_device_class": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.encrypted": "0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_id": "1",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.vdo": "0"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             },
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "vg_name": "ceph_vg1"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         }
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     ],
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     "2": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "devices": [
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "/dev/loop5"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             ],
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_name": "ceph_lv2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_size": "21470642176",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "name": "ceph_lv2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "tags": {
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.cluster_name": "ceph",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.crush_device_class": "",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.encrypted": "0",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osd_id": "2",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:                 "ceph.vdo": "0"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             },
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "type": "block",
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:             "vg_name": "ceph_vg2"
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:         }
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]:     ]
Nov 29 07:46:30 compute-0 sweet_mcclintock[213555]: }
Nov 29 07:46:30 compute-0 systemd[1]: libpod-f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703.scope: Deactivated successfully.
Nov 29 07:46:30 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 07:46:30 compute-0 podman[213538]: 2025-11-29 07:46:30.083936196 +0000 UTC m=+1.011732218 container died f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8c2ba048f6a85d56024b5c77483d1a54f2f16c569e3edafb0c7ff4dba43d2e4-merged.mount: Deactivated successfully.
Nov 29 07:46:30 compute-0 podman[213538]: 2025-11-29 07:46:30.14194931 +0000 UTC m=+1.069745332 container remove f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:46:30 compute-0 systemd[1]: libpod-conmon-f4da278a32426a8b7c46a94fa9c1fad15a26eeebb2e71521f7a02d24c5219703.scope: Deactivated successfully.
Nov 29 07:46:30 compute-0 sudo[213358]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 sudo[213677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:30 compute-0 sudo[213677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:30 compute-0 sudo[213677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 sudo[213721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:30 compute-0 sudo[213721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:30 compute-0 sudo[213721]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 sudo[213750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:30 compute-0 sudo[213750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:30 compute-0 sudo[213750]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 sudo[213800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:46:30 compute-0 sudo[213800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:30 compute-0 sudo[213848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhyrmgvsxszcgyddkisjcfavrobndxnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402390.1296732-989-123221041011900/AnsiballZ_copy.py'
Nov 29 07:46:30 compute-0 sudo[213848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:30 compute-0 python3.9[213852]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:30 compute-0 sudo[213848]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.791461315 +0000 UTC m=+0.054985043 container create 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:46:30 compute-0 systemd[1]: Started libpod-conmon-9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd.scope.
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.764889599 +0000 UTC m=+0.028413417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.900523165 +0000 UTC m=+0.164046943 container init 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.914024509 +0000 UTC m=+0.177548227 container start 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.917078271 +0000 UTC m=+0.180602089 container attach 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:46:30 compute-0 clever_tesla[213981]: 167 167
Nov 29 07:46:30 compute-0 systemd[1]: libpod-9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd.scope: Deactivated successfully.
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.919603209 +0000 UTC m=+0.183126947 container died 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c6325801a08579c787455ccfea02854970dfedb8b350f2d6014404e30c15f53-merged.mount: Deactivated successfully.
Nov 29 07:46:30 compute-0 podman[213918]: 2025-11-29 07:46:30.959998578 +0000 UTC m=+0.223522316 container remove 9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:46:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:30 compute-0 systemd[1]: libpod-conmon-9329bfd84e923470ed8ed10cacd52701669faa90be95bfe305ebff5a035651fd.scope: Deactivated successfully.
Nov 29 07:46:31 compute-0 podman[214010]: 2025-11-29 07:46:31.180292665 +0000 UTC m=+0.065377073 container create f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:46:31 compute-0 systemd[1]: Started libpod-conmon-f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47.scope.
Nov 29 07:46:31 compute-0 podman[214010]: 2025-11-29 07:46:31.150557934 +0000 UTC m=+0.035642392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334068445a8c96846d245e68ba7bf5bf2145d86e95732df9776c037ba703cb93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334068445a8c96846d245e68ba7bf5bf2145d86e95732df9776c037ba703cb93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334068445a8c96846d245e68ba7bf5bf2145d86e95732df9776c037ba703cb93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334068445a8c96846d245e68ba7bf5bf2145d86e95732df9776c037ba703cb93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:31 compute-0 podman[214010]: 2025-11-29 07:46:31.280211509 +0000 UTC m=+0.165295907 container init f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:46:31 compute-0 podman[214010]: 2025-11-29 07:46:31.297462234 +0000 UTC m=+0.182546642 container start f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:46:31 compute-0 podman[214010]: 2025-11-29 07:46:31.301588585 +0000 UTC m=+0.186673013 container attach f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:46:31 compute-0 sudo[214106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgdjjnhaseemtkriyidoziqcnmevsrxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402390.7891734-989-103766696524444/AnsiballZ_copy.py'
Nov 29 07:46:31 compute-0 sudo[214106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:32 compute-0 python3.9[214108]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:32 compute-0 sudo[214106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:32 compute-0 silly_leakey[214028]: {
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_id": 2,
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "type": "bluestore"
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     },
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_id": 0,
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "type": "bluestore"
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     },
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_id": 1,
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:46:32 compute-0 silly_leakey[214028]:         "type": "bluestore"
Nov 29 07:46:32 compute-0 silly_leakey[214028]:     }
Nov 29 07:46:32 compute-0 silly_leakey[214028]: }
Nov 29 07:46:32 compute-0 systemd[1]: libpod-f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47.scope: Deactivated successfully.
Nov 29 07:46:32 compute-0 systemd[1]: libpod-f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47.scope: Consumed 1.116s CPU time.
Nov 29 07:46:32 compute-0 podman[214010]: 2025-11-29 07:46:32.405733703 +0000 UTC m=+1.290818111 container died f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 07:46:32 compute-0 ceph-mon[75237]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-334068445a8c96846d245e68ba7bf5bf2145d86e95732df9776c037ba703cb93-merged.mount: Deactivated successfully.
Nov 29 07:46:32 compute-0 sudo[214300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxdsvkregbtrjanuilphonisxynamcro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402392.36349-989-79595328528711/AnsiballZ_copy.py'
Nov 29 07:46:32 compute-0 sudo[214300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:32 compute-0 podman[214010]: 2025-11-29 07:46:32.662381251 +0000 UTC m=+1.547465619 container remove f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:46:32 compute-0 systemd[1]: libpod-conmon-f51f419f195637843c4f9c7f33d9fdf764ab9b9a36c60550c347777debfa3b47.scope: Deactivated successfully.
Nov 29 07:46:32 compute-0 sudo[213800]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:46:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:46:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:32 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f5fd02db-245c-495a-8335-c7a2c594ed8d does not exist
Nov 29 07:46:32 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 2571259f-029e-4a05-96f9-c832134767dd does not exist
Nov 29 07:46:32 compute-0 sudo[214303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:32 compute-0 sudo[214303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:32 compute-0 sudo[214303]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:32 compute-0 sudo[214329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:46:32 compute-0 sudo[214329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:32 compute-0 sudo[214329]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:32 compute-0 python3.9[214302]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:32 compute-0 sudo[214300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:33 compute-0 sshd-session[214326]: Received disconnect from 20.185.243.158 port 36208:11: Bye Bye [preauth]
Nov 29 07:46:33 compute-0 sshd-session[214326]: Disconnected from authenticating user root 20.185.243.158 port 36208 [preauth]
Nov 29 07:46:33 compute-0 sudo[214504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwlearttnmcazougkyefhozxdtohqtpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402392.9837348-989-224151633081634/AnsiballZ_copy.py'
Nov 29 07:46:33 compute-0 sudo[214504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:33 compute-0 python3.9[214506]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:33 compute-0 sudo[214504]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:33 compute-0 sudo[214656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvynpnjmsizxcgeowgvzctlqqhdvezsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402393.6368682-989-148951059813222/AnsiballZ_copy.py'
Nov 29 07:46:33 compute-0 sudo[214656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:34 compute-0 python3.9[214658]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:34 compute-0 sudo[214656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:46:34 compute-0 ceph-mon[75237]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:34 compute-0 sudo[214808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbqhkwaviddotzmleacmkewhikinhrly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402394.3869069-1025-44364537504323/AnsiballZ_copy.py'
Nov 29 07:46:34 compute-0 sudo[214808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:34 compute-0 python3.9[214810]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:34 compute-0 sudo[214808]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:35 compute-0 sudo[214960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbtbjankdkvhguspogoshluatkipwhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402395.0631354-1025-157024042691310/AnsiballZ_copy.py'
Nov 29 07:46:35 compute-0 sudo[214960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:35 compute-0 python3.9[214962]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:35 compute-0 sudo[214960]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:36 compute-0 sudo[215114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrtmpeqlaiqkosfcystednokyfvbdpet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402395.790056-1025-56164875282092/AnsiballZ_copy.py'
Nov 29 07:46:36 compute-0 sudo[215114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:36 compute-0 python3.9[215116]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:36 compute-0 sudo[215114]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:36 compute-0 ceph-mon[75237]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:36 compute-0 sshd-session[214963]: Invalid user student from 114.34.106.146 port 48438
Nov 29 07:46:36 compute-0 sudo[215266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fthnyajefiwiurghkhqzclcoyzejtnxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402396.5363944-1025-153065952351110/AnsiballZ_copy.py'
Nov 29 07:46:36 compute-0 sudo[215266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:36 compute-0 sshd-session[214963]: Received disconnect from 114.34.106.146 port 48438:11: Bye Bye [preauth]
Nov 29 07:46:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:36 compute-0 sshd-session[214963]: Disconnected from invalid user student 114.34.106.146 port 48438 [preauth]
Nov 29 07:46:37 compute-0 python3.9[215268]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:37 compute-0 sudo[215266]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:37 compute-0 podman[215269]: 2025-11-29 07:46:37.205032404 +0000 UTC m=+0.090908921 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
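                                           [note] The health_status=healthy event above comes from podman's periodic healthcheck running the configured test ('/openstack/healthcheck' mounted into the container). The same check can be run on demand; a minimal sketch, assuming only the container name shown in the log:

                                               # run the container's configured healthcheck once; exit 0 = healthy, 1 = unhealthy
                                               podman healthcheck run ovn_controller && echo healthy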
Nov 29 07:46:37 compute-0 ceph-mon[75237]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:37 compute-0 sudo[215444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmgjmxzyacmavyhasvaflsvgacqzcvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402397.2704036-1025-188946016563368/AnsiballZ_copy.py'
Nov 29 07:46:37 compute-0 sudo[215444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:37 compute-0 python3.9[215446]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:37 compute-0 sudo[215444]: pam_unix(sudo:session): session closed for user root
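                                           [note] The ansible-ansible.legacy.copy tasks above fan one issued certificate/key pair out to the libvirt client and QEMU server/client roles. A minimal shell sketch of the equivalent layout, with paths, owners, groups, and modes taken from the log entries above; using install(1) is an illustration, not the play's actual mechanism:

                                               SRC=/var/lib/openstack/certs/libvirt/default
                                               # libvirt client key and cluster CA, root-owned
                                               install -o root -g root -m 0644 "$SRC/tls.key" /etc/pki/libvirt/private/clientkey.pem
                                               install -o root -g root -m 0644 "$SRC/ca.crt"  /etc/pki/CA/cacert.pem
                                               # QEMU server and client roles reuse the same pair, readable by group qemu
                                               for f in server-cert.pem client-cert.pem; do
                                                   install -o root -g qemu -m 0640 "$SRC/tls.crt" "/etc/pki/qemu/$f"
                                               done
                                               for f in server-key.pem client-key.pem; do
                                                   install -o root -g qemu -m 0640 "$SRC/tls.key" "/etc/pki/qemu/$f"
                                               done
                                               install -o root -g qemu -m 0640 "$SRC/ca.crt" /etc/pki/qemu/ca-cert.pem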
Nov 29 07:46:38 compute-0 sudo[215596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailakkqdzngehxrjgsldnmlqcgzhmlao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402398.1166077-1061-17264835486796/AnsiballZ_systemd.py'
Nov 29 07:46:38 compute-0 sudo[215596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:38 compute-0 python3.9[215598]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:46:38 compute-0 systemd[1]: Reloading.
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:46:38
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log']
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:46:38 compute-0 systemd-rc-local-generator[215623]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:46:38 compute-0 systemd-sysv-generator[215630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:46:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:39 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 07:46:39 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 07:46:39 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 07:46:39 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 07:46:39 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 29 07:46:39 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 29 07:46:39 compute-0 sudo[215596]: pam_unix(sudo:session): session closed for user root
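                                           [note] Each ansible.builtin.systemd task in this run (virtlogd above, then virtnodedevd, virtproxyd, virtqemud, virtsecretd below) performs a daemon reload followed by a unit restart, which is why systemd logs "Reloading." and then starts the unit's socket dependencies before the service itself. Equivalent by hand, with the unit name from the log:

                                               systemctl daemon-reload
                                               systemctl restart virtlogd.service
                                               # virtlogd.socket and virtlogd-admin.socket start automatically as dependencies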
Nov 29 07:46:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:39 compute-0 sudo[215790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aocfwckbgbhehsuifhzrzqdeqfgbjzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402399.5930958-1061-262916689787783/AnsiballZ_systemd.py'
Nov 29 07:46:39 compute-0 sudo[215790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:40 compute-0 ceph-mon[75237]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:40 compute-0 python3.9[215792]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:46:40 compute-0 systemd[1]: Reloading.
Nov 29 07:46:40 compute-0 systemd-rc-local-generator[215818]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:46:40 compute-0 systemd-sysv-generator[215822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:46:40 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 07:46:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 07:46:40 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 07:46:40 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 07:46:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 07:46:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 07:46:40 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:46:40 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:46:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:40 compute-0 sudo[215790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:41 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 07:46:41 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 07:46:41 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 07:46:41 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 07:46:41 compute-0 sudo[216014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifphoxlkawrlvlonlnkulhtjpinxbvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402401.582994-1061-59666642941328/AnsiballZ_systemd.py'
Nov 29 07:46:41 compute-0 sudo[216014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:42 compute-0 python3.9[216018]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:46:42 compute-0 systemd[1]: Reloading.
Nov 29 07:46:42 compute-0 ceph-mon[75237]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:42 compute-0 systemd-rc-local-generator[216041]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:46:42 compute-0 systemd-sysv-generator[216044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:46:42 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 07:46:42 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 07:46:42 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 07:46:42 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 07:46:42 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 07:46:42 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 07:46:42 compute-0 setroubleshoot[215855]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 9a8e50cd-9625-4e37-a2b4-f49ba071eeaf
Nov 29 07:46:42 compute-0 setroubleshoot[215855]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 29 07:46:42 compute-0 setroubleshoot[215855]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 9a8e50cd-9625-4e37-a2b4-f49ba071eeaf
Nov 29 07:46:42 compute-0 sudo[216014]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:42 compute-0 setroubleshoot[215855]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
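                                           [note] If this denial must be silenced before a proper policy fix is available, the catchall suggestion above amounts to compiling the raw AVC records into a local policy module. A minimal sketch of that flow; the module name my-virtlogd is the plugin's own placeholder:

                                               # collect the raw AVC records for virtlogd and compile a local policy module
                                               ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                               # install it at a priority above the distribution policy
                                               semodule -X 300 -i my-virtlogd.pp
                                               # to back it out later:
                                               semodule -X 300 -r my-virtlogd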
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:46:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:43 compute-0 sudo[216229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-disbfiohniniadaiudfpoukmonlbedkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402402.8380144-1061-212258346234229/AnsiballZ_systemd.py'
Nov 29 07:46:43 compute-0 sudo[216229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:43 compute-0 ceph-mon[75237]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:43 compute-0 python3.9[216231]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:46:43 compute-0 systemd[1]: Reloading.
Nov 29 07:46:43 compute-0 systemd-rc-local-generator[216257]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:46:43 compute-0 systemd-sysv-generator[216263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:46:44 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 07:46:44 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 07:46:44 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 07:46:44 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 07:46:44 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 07:46:44 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 07:46:44 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 07:46:44 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 07:46:44 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 07:46:44 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 07:46:44 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:46:44 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:46:44 compute-0 sudo[216229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:44 compute-0 sudo[216444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivvqnqbqptydqujfzfuyvktgvycvtxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402404.4808347-1061-206100211039769/AnsiballZ_systemd.py'
Nov 29 07:46:44 compute-0 sudo[216444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:45 compute-0 python3.9[216446]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:46:45 compute-0 systemd[1]: Reloading.
Nov 29 07:46:45 compute-0 systemd-rc-local-generator[216469]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:46:45 compute-0 systemd-sysv-generator[216474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:46:45 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 07:46:45 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 07:46:45 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 07:46:45 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 07:46:45 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 07:46:45 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 07:46:45 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:46:45 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 07:46:45 compute-0 sudo[216444]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:46 compute-0 ceph-mon[75237]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:46 compute-0 sudo[216656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlrchmtndulrhdxebnmqndbtexjmiail ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402405.8958602-1098-77149947935000/AnsiballZ_file.py'
Nov 29 07:46:46 compute-0 sudo[216656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:46 compute-0 python3.9[216658]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:46 compute-0 sudo[216656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:46:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 2803 writes, 12K keys, 2803 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 2803 writes, 2803 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1208 writes, 5322 keys, 1208 commit groups, 1.0 writes per commit group, ingest: 8.04 MB, 0.01 MB/s
                                           Interval WAL: 1208 writes, 1208 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     47.5      0.30              0.04         5    0.060       0      0       0.0       0.0
                                             L6      1/0    8.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     34.7     29.1      1.05              0.12         4    0.263     16K   1795       0.0       0.0
                                            Sum      1/0    8.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     27.0     33.2      1.35              0.17         9    0.150     16K   1795       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     27.0     33.2      1.35              0.17         8    0.169     16K   1795       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     34.7     29.1      1.05              0.12         4    0.263     16K   1795       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     47.7      0.30              0.04         4    0.075       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.014
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 1.4 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 308.00 MB usage: 1.04 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(68,897.11 KB,0.284443%) FilterBlock(10,52.17 KB,0.0165419%) IndexBlock(10,112.36 KB,0.0356253%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:46:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:47 compute-0 sudo[216810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnltzwqakbcthpzfhdbjbpxnkhfcckeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402406.8012602-1106-143863097059427/AnsiballZ_find.py'
Nov 29 07:46:47 compute-0 sudo[216810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:47 compute-0 python3.9[216812]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:46:47 compute-0 sudo[216810]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:47 compute-0 sudo[216962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjasqaygbdgiupfjxekdlawdcsptaunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402407.499319-1114-48103201216341/AnsiballZ_command.py'
Nov 29 07:46:47 compute-0 sudo[216962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:47 compute-0 python3.9[216964]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:46:48 compute-0 sudo[216962]: pam_unix(sudo:session): session closed for user root
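                                             [note] The pipeline above extracts the cluster fsid from the staged ceph.conf; the trailing xargs only trims surrounding whitespace. Runnable standalone, with the path from the log:

                                                 awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                 # on this host prints 321e9cb7-01a2-5759-bf8c-981c9a64aa3e,
                                                 # the same fsid used as the libvirt secret UUID below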
Nov 29 07:46:48 compute-0 ceph-mon[75237]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:48 compute-0 sshd-session[216782]: Invalid user autcom from 103.236.140.19 port 46122
Nov 29 07:46:48 compute-0 python3.9[217118]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:46:48 compute-0 sshd-session[216782]: Received disconnect from 103.236.140.19 port 46122:11: Bye Bye [preauth]
Nov 29 07:46:48 compute-0 sshd-session[216782]: Disconnected from invalid user autcom 103.236.140.19 port 46122 [preauth]
Nov 29 07:46:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:49 compute-0 python3.9[217268]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:50 compute-0 python3.9[217389]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402408.9947844-1133-238108236947801/.source.xml follow=False _original_basename=secret.xml.j2 checksum=66afa20a638762d574b961834bb852c74c86f618 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:50 compute-0 ceph-mon[75237]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:50 compute-0 sudo[217539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curiptinbxkpxiueixjbnadeqezasjwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402410.2492137-1148-250221884377420/AnsiballZ_command.py'
Nov 29 07:46:50 compute-0 sudo[217539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:50 compute-0 python3.9[217541]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 321e9cb7-01a2-5759-bf8c-981c9a64aa3e
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:46:50 compute-0 polkitd[43585]: Registered Authentication Agent for unix-process:217543:464993 (system bus name :1.2839 [pkttyagent --process 217543 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:46:50 compute-0 polkitd[43585]: Unregistered Authentication Agent for unix-process:217543:464993 (system bus name :1.2839, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:46:50 compute-0 polkitd[43585]: Registered Authentication Agent for unix-process:217542:464993 (system bus name :1.2840 [pkttyagent --process 217542 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:46:50 compute-0 polkitd[43585]: Unregistered Authentication Agent for unix-process:217542:464993 (system bus name :1.2840, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:46:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:51 compute-0 sudo[217539]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:51 compute-0 ceph-mon[75237]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:52 compute-0 python3.9[217703]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:52 compute-0 sudo[217853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mycofuiaunsmacyfneamdtkhqayzvvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402412.3696697-1164-8154096524317/AnsiballZ_command.py'
Nov 29 07:46:52 compute-0 sudo[217853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 07:46:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.027s CPU time.
Nov 29 07:46:52 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 07:46:52 compute-0 sudo[217853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:53 compute-0 sudo[218006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jskyfwsbclmipdpsjwpjxmhyqlneuwcq ; FSID=321e9cb7-01a2-5759-bf8c-981c9a64aa3e KEY=AQACoCppAAAAABAAUMBgxavMgjAgzQEp37H3Rw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402413.076458-1172-124885436347298/AnsiballZ_command.py'
Nov 29 07:46:53 compute-0 sudo[218006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:54 compute-0 polkitd[43585]: Registered Authentication Agent for unix-process:218009:465338 (system bus name :1.2843 [pkttyagent --process 218009 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:46:54 compute-0 polkitd[43585]: Unregistered Authentication Agent for unix-process:218009:465338 (system bus name :1.2843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:46:54 compute-0 ceph-mon[75237]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:54 compute-0 sudo[218006]: pam_unix(sudo:session): session closed for user root
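                                           [note] Taken together, the two command tasks above are the usual libvirt/Ceph secret dance: drop any stale secret, define one whose UUID equals the cluster fsid, then load the cephx key (the task at 07:46:53 receives FSID and KEY via the environment, presumably for virsh secret-set-value; the command body itself is not logged). A minimal sketch under that assumption; the secret XML body is the conventional ceph-usage form, not quoted in the log:

                                               FSID=321e9cb7-01a2-5759-bf8c-981c9a64aa3e
                                               virsh secret-undefine "$FSID" || true   # ignore "secret not found" on first run
                                               cat > /tmp/secret.xml <<EOF
                                               <secret ephemeral='no' private='no'>
                                                 <uuid>$FSID</uuid>
                                                 <usage type='ceph'>
                                                   <name>client.openstack secret</name>  <!-- assumed usage name -->
                                                 </usage>
                                               </secret>
                                               EOF
                                               virsh secret-define --file /tmp/secret.xml
                                               virsh secret-set-value --secret "$FSID" --base64 "$KEY"
                                               rm -f /tmp/secret.xml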
Nov 29 07:46:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:54 compute-0 sudo[218164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folrqztdocfuozgahyunmmlkiwglrghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402414.6804202-1180-27809632428094/AnsiballZ_copy.py'
Nov 29 07:46:54 compute-0 sudo[218164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:55 compute-0 python3.9[218166]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:55 compute-0 sudo[218164]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:55 compute-0 ceph-mon[75237]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:55 compute-0 sudo[218316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohwvxwqmpcsnnavizgibeqlkfmzhunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402415.40978-1188-251643128207694/AnsiballZ_stat.py'
Nov 29 07:46:55 compute-0 sudo[218316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:46:56 compute-0 python3.9[218318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:56 compute-0 sudo[218316]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:56 compute-0 sudo[218439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fximcudppjvrahkczermjzfljithhxyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402415.40978-1188-251643128207694/AnsiballZ_copy.py'
Nov 29 07:46:56 compute-0 sudo[218439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:56 compute-0 python3.9[218441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402415.40978-1188-251643128207694/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:56 compute-0 sudo[218439]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:57 compute-0 sudo[218591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psohsuydvinhtaelvtzpwdzjdjgbpatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402416.9155152-1204-214858597687925/AnsiballZ_file.py'
Nov 29 07:46:57 compute-0 sudo[218591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:57 compute-0 python3.9[218593]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:57 compute-0 sudo[218591]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-0 sudo[218743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvgdhfjkbwusmcgvbfywnlqdxaqofeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402417.6523197-1212-234151711784565/AnsiballZ_stat.py'
Nov 29 07:46:58 compute-0 sudo[218743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:58 compute-0 python3.9[218745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:58 compute-0 sudo[218743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-0 ceph-mon[75237]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:58 compute-0 sudo[218821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwtsuyjamjrihdthogfrrowwdcnuwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402417.6523197-1212-234151711784565/AnsiballZ_file.py'
Nov 29 07:46:58 compute-0 sudo[218821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:58 compute-0 python3.9[218823]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:58 compute-0 sudo[218821]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:59 compute-0 sudo[218973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnedeffijzktusjxjqnfvdtxstdbfwji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402419.0062568-1224-177545182154335/AnsiballZ_stat.py'
Nov 29 07:46:59 compute-0 sudo[218973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:59 compute-0 podman[218975]: 2025-11-29 07:46:59.436003235 +0000 UTC m=+0.087935892 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:46:59 compute-0 python3.9[218976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:46:59 compute-0 sudo[218973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:59 compute-0 sudo[219069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufxyicstvexvgnwpgzpcqahzahdveyrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402419.0062568-1224-177545182154335/AnsiballZ_file.py'
Nov 29 07:46:59 compute-0 sudo[219069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:46:59 compute-0 ceph-mon[75237]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:59 compute-0 python3.9[219071]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0x3a0a2k recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:46:59 compute-0 sudo[219069]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:00 compute-0 sudo[219221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtkuhslkxueoztldhnrrvzxsuiesommj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402420.1902745-1236-163139556706944/AnsiballZ_stat.py'
Nov 29 07:47:00 compute-0 sudo[219221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:00 compute-0 python3.9[219223]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:00 compute-0 sudo[219221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:01 compute-0 sudo[219299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeyveyzbcvfxvoqsrcdxhcpsmreumbel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402420.1902745-1236-163139556706944/AnsiballZ_file.py'
Nov 29 07:47:01 compute-0 sudo[219299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:01 compute-0 python3.9[219301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:01 compute-0 sudo[219299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:02 compute-0 sudo[219451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qakosytaerqgsoddkqlasodmgvdppkzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402421.6714234-1249-218184855616415/AnsiballZ_command.py'
Nov 29 07:47:02 compute-0 sudo[219451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:02 compute-0 python3.9[219453]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:02 compute-0 sudo[219451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:03 compute-0 ceph-mon[75237]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:03 compute-0 sudo[219604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbspwgxncpucxaqgasamvtjwafraasfv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402423.028679-1257-202022908596032/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:47:03 compute-0 sudo[219604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:03 compute-0 python3[219606]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:47:03 compute-0 sudo[219604]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:04 compute-0 ceph-mon[75237]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:05 compute-0 sudo[219756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmeucdnsqdjqwbhtyipqztjqhzjwctj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402424.592279-1265-157935357757035/AnsiballZ_stat.py'
Nov 29 07:47:05 compute-0 sudo[219756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:05 compute-0 python3.9[219758]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:05 compute-0 sudo[219756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:05 compute-0 sudo[219834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjptgprmpgqryqdncwinwzwncrfnlpdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402424.592279-1265-157935357757035/AnsiballZ_file.py'
Nov 29 07:47:05 compute-0 sudo[219834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:05 compute-0 python3.9[219836]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:05 compute-0 sudo[219834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:06 compute-0 sudo[219986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddkefsmdukksbwcaomesvjctjhkmzqhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402425.9043818-1277-224893481907986/AnsiballZ_stat.py'
Nov 29 07:47:06 compute-0 sudo[219986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:06 compute-0 ceph-mon[75237]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:06 compute-0 python3.9[219988]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:06 compute-0 sudo[219986]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:06 compute-0 sudo[220064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhlzlyfentgzxhrbopkpykzehgouhhvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402425.9043818-1277-224893481907986/AnsiballZ_file.py'
Nov 29 07:47:06 compute-0 sudo[220064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:07 compute-0 python3.9[220066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:07 compute-0 sudo[220064]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:07 compute-0 sudo[220227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtaddlgohduywksfxyqulxverixfdlaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402427.2424295-1289-33060412956238/AnsiballZ_stat.py'
Nov 29 07:47:07 compute-0 sudo[220227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:07 compute-0 podman[220190]: 2025-11-29 07:47:07.675368862 +0000 UTC m=+0.136931121 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:47:07 compute-0 python3.9[220231]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:07 compute-0 sudo[220227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:07 compute-0 ceph-mon[75237]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:08 compute-0 sudo[220319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvzfzoaklgnselztqjeuqvqljyficzpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402427.2424295-1289-33060412956238/AnsiballZ_file.py'
Nov 29 07:47:08 compute-0 sudo[220319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:08 compute-0 python3.9[220321]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:08 compute-0 sudo[220319]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:09 compute-0 sudo[220471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mikrlgnzvuuvinenqelrvutwigyqyvwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402428.5829625-1301-9398074973046/AnsiballZ_stat.py'
Nov 29 07:47:09 compute-0 sudo[220471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:09 compute-0 python3.9[220473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:09 compute-0 sudo[220471]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:09 compute-0 sudo[220549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhuvrafdanckipnodyoxllihzdnpmiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402428.5829625-1301-9398074973046/AnsiballZ_file.py'
Nov 29 07:47:09 compute-0 sudo[220549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:09 compute-0 python3.9[220551]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:09 compute-0 sudo[220549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:10 compute-0 ceph-mon[75237]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:10 compute-0 sudo[220701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frosxzjtapnlsusakchjcglviuqflgot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402429.8883843-1313-80883285595258/AnsiballZ_stat.py'
Nov 29 07:47:10 compute-0 sudo[220701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:10 compute-0 python3.9[220703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:10 compute-0 sudo[220701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:11 compute-0 sudo[220826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpyhnuanglwajuayaxaqlnbfzqoutbkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402429.8883843-1313-80883285595258/AnsiballZ_copy.py'
Nov 29 07:47:11 compute-0 sudo[220826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:11 compute-0 python3.9[220828]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764402429.8883843-1313-80883285595258/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:11 compute-0 sudo[220826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:11 compute-0 sudo[220978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfbjdmqympcdafirgepfpvosaeqqqijl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402431.4732337-1328-56831702411814/AnsiballZ_file.py'
Nov 29 07:47:11 compute-0 sudo[220978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:11 compute-0 ceph-mon[75237]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:12 compute-0 python3.9[220980]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:12 compute-0 sudo[220978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:13 compute-0 sudo[221130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udejqjfpklcaotyukvyqmuaorfjbwnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402432.2968109-1336-26624954001125/AnsiballZ_command.py'
Nov 29 07:47:13 compute-0 sudo[221130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:13 compute-0 python3.9[221132]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:13 compute-0 sudo[221130]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:14 compute-0 sudo[221285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voudglanrdsxylbfrydryqlhdeemjmtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402433.6812744-1344-2792804792691/AnsiballZ_blockinfile.py'
Nov 29 07:47:14 compute-0 sudo[221285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:14 compute-0 ceph-mon[75237]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:14 compute-0 python3.9[221287]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:14 compute-0 sudo[221285]: pam_unix(sudo:session): session closed for user root
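The blockinfile invocation completing at 07:47:14 writes a managed block of include statements into /etc/sysconfig/nftables.conf and validates the file before committing it. A minimal shell sketch of the equivalent manual edit follows; the include lines, target path, marker text, and validate command are copied from the logged module arguments, and the rendered marker comments are derived from the logged marker=# {mark} ANSIBLE MANAGED BLOCK with marker_begin=BEGIN / marker_end=END. This is an illustration, not the module's implementation.

  # Sketch only: what the logged blockinfile task effectively leaves in the file.
  # Note: blockinfile replaces an existing marked block in place; a plain append
  # like this is only equivalent on the first write (and create=False in the log
  # means the file must already exist).
  cat >> /etc/sysconfig/nftables.conf <<'EOF'
  # BEGIN ANSIBLE MANAGED BLOCK
  include "/etc/nftables/iptables.nft"
  include "/etc/nftables/edpm-chains.nft"
  include "/etc/nftables/edpm-rules.nft"
  include "/etc/nftables/edpm-jumps.nft"
  # END ANSIBLE MANAGED BLOCK
  EOF
  nft -c -f /etc/sysconfig/nftables.conf   # the logged validate step: nft -c -f %s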
Nov 29 07:47:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:15 compute-0 sudo[221437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqbntgujugoxnafgnwkgbmqyrbwikso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402435.0861888-1353-24800273003388/AnsiballZ_command.py'
Nov 29 07:47:15 compute-0 sudo[221437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:15 compute-0 ceph-mon[75237]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:15 compute-0 python3.9[221439]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:15 compute-0 sudo[221437]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:16 compute-0 sudo[221590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlizrrpozslpxvvpgdugjhcsffoimbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402435.9075828-1361-24154526868888/AnsiballZ_stat.py'
Nov 29 07:47:16 compute-0 sudo[221590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:16 compute-0 python3.9[221592]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:47:16 compute-0 sudo[221590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:16 compute-0 sudo[221744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoviqesmmnizznfymzeubultrrtbtega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402436.6692-1369-214324406658776/AnsiballZ_command.py'
Nov 29 07:47:16 compute-0 sudo[221744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:17 compute-0 python3.9[221746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:17 compute-0 sudo[221744]: pam_unix(sudo:session): session closed for user root
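The three commands dispatched between 07:47:13 and 07:47:17 form a check-then-apply sequence: dry-run the full concatenated ruleset, make sure the chain skeleton exists, then flush and repopulate the EDPM chains in a single nft invocation so the rule swap lands as one transaction. Reassembled from the logged _raw_params, with comments added:

  set -o pipefail
  # 1. Validate everything first; -c makes nft parse and check without applying.
  cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
      /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
      /etc/nftables/edpm-jumps.nft | nft -c -f -
  # 2. Create the tables and chains (safe to re-run: nft merges declarations
  #    into tables and chains that already exist).
  nft -f /etc/nftables/edpm-chains.nft
  # 3. Flush the old rules and load the new ones in one nft run.
  cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
      /etc/nftables/edpm-update-jumps.nft | nft -f -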
Nov 29 07:47:17 compute-0 sudo[221899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljhwnjdptqnnghcmtvmiecvdusqyerzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402437.457415-1377-125687208946406/AnsiballZ_file.py'
Nov 29 07:47:17 compute-0 sudo[221899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:17 compute-0 python3.9[221901]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:18 compute-0 sudo[221899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:18 compute-0 ceph-mon[75237]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:18 compute-0 sudo[222051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruqkgbepsifuklregzkppinqvjxcvvqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402438.2211902-1385-113357395926811/AnsiballZ_stat.py'
Nov 29 07:47:18 compute-0 sudo[222051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:18 compute-0 python3.9[222053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:18 compute-0 sudo[222051]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:19 compute-0 sudo[222174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ansvlhztmvayzuwrikjzlzclorpiwrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402438.2211902-1385-113357395926811/AnsiballZ_copy.py'
Nov 29 07:47:19 compute-0 sudo[222174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:19 compute-0 python3.9[222176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402438.2211902-1385-113357395926811/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:19 compute-0 sudo[222174]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:20 compute-0 sudo[222326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvyylbnzicykdlquusveggsabghzwcan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402439.8054771-1400-131878948214329/AnsiballZ_stat.py'
Nov 29 07:47:20 compute-0 sudo[222326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:20 compute-0 ceph-mon[75237]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:20 compute-0 python3.9[222328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:20 compute-0 sudo[222326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:20 compute-0 sudo[222449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbxonranegsfqitsfgqirzxbcjgpdxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402439.8054771-1400-131878948214329/AnsiballZ_copy.py'
Nov 29 07:47:20 compute-0 sudo[222449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:21 compute-0 python3.9[222451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402439.8054771-1400-131878948214329/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:21 compute-0 sudo[222449]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:21 compute-0 sudo[222601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgirlplrsmbkeufqqrkcdkqccpkhcax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402441.2523544-1415-209207452219825/AnsiballZ_stat.py'
Nov 29 07:47:21 compute-0 sudo[222601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:21 compute-0 python3.9[222603]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:21 compute-0 sudo[222601]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:22 compute-0 sudo[222724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvxdtaopcaazbguhaakldidktnfhzdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402441.2523544-1415-209207452219825/AnsiballZ_copy.py'
Nov 29 07:47:22 compute-0 sudo[222724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:22 compute-0 ceph-mon[75237]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:22 compute-0 python3.9[222726]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402441.2523544-1415-209207452219825/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:22 compute-0 sudo[222724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:22 compute-0 sudo[222876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixxktuasdutuicbpvzwvgtnsvujvovij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402442.555556-1430-56024652202979/AnsiballZ_systemd.py'
Nov 29 07:47:22 compute-0 sudo[222876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:23 compute-0 python3.9[222878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:47:23 compute-0 systemd[1]: Reloading.
Nov 29 07:47:23 compute-0 systemd-rc-local-generator[222904]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:47:23 compute-0 systemd-sysv-generator[222909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:47:23 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 07:47:23 compute-0 sudo[222876]: pam_unix(sudo:session): session closed for user root
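The ansible.builtin.systemd call at 07:47:23 (daemon_reload=True, enabled=True, state=restarted) maps onto roughly the following systemctl sequence; the "Reloading." and "Reached target edpm_libvirt.target." lines above are its visible result. A sketch of the equivalent commands, not the module's literal implementation:

  systemctl daemon-reload                  # daemon_reload=True: pick up the new unit files
  systemctl enable edpm_libvirt.target     # enabled=True
  systemctl restart edpm_libvirt.target    # state=restarted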
Nov 29 07:47:24 compute-0 sudo[223067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pflfkhemirebctpspryjjpjtnqonusuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402443.925953-1438-277750918854459/AnsiballZ_systemd.py'
Nov 29 07:47:24 compute-0 sudo[223067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:24 compute-0 ceph-mon[75237]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:24 compute-0 python3.9[223069]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:47:24 compute-0 systemd[1]: Reloading.
Nov 29 07:47:24 compute-0 systemd-rc-local-generator[223098]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:47:24 compute-0 systemd-sysv-generator[223101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:47:24 compute-0 systemd[1]: Reloading.
Nov 29 07:47:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:25 compute-0 systemd-rc-local-generator[223133]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:47:25 compute-0 systemd-sysv-generator[223137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:47:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 07:47:25 compute-0 sudo[223067]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.290653) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445290822, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1142, "num_deletes": 251, "total_data_size": 1724188, "memory_usage": 1756088, "flush_reason": "Manual Compaction"}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445300958, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1014428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12177, "largest_seqno": 13318, "table_properties": {"data_size": 1010277, "index_size": 1739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10666, "raw_average_key_size": 19, "raw_value_size": 1001194, "raw_average_value_size": 1874, "num_data_blocks": 80, "num_entries": 534, "num_filter_entries": 534, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402323, "oldest_key_time": 1764402323, "file_creation_time": 1764402445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10389 microseconds, and 4794 cpu microseconds.
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.301048) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1014428 bytes OK
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.301073) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.302724) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.302754) EVENT_LOG_v1 {"time_micros": 1764402445302743, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.302782) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1718967, prev total WAL file size 1718967, number of live WAL files 2.
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.303928) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(990KB)], [29(8674KB)]
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445304202, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9897106, "oldest_snapshot_seqno": -1}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4077 keys, 7267835 bytes, temperature: kUnknown
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445379820, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7267835, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7238896, "index_size": 17629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 98902, "raw_average_key_size": 24, "raw_value_size": 7163657, "raw_average_value_size": 1757, "num_data_blocks": 758, "num_entries": 4077, "num_filter_entries": 4077, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.380117) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7267835 bytes
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.381831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.7 rd, 96.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(16.9) write-amplify(7.2) OK, records in: 4539, records dropped: 462 output_compression: NoCompression
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.381852) EVENT_LOG_v1 {"time_micros": 1764402445381842, "job": 12, "event": "compaction_finished", "compaction_time_micros": 75717, "compaction_time_cpu_micros": 44699, "output_level": 6, "num_output_files": 1, "total_output_size": 7267835, "num_input_records": 4539, "num_output_records": 4077, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445382243, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402445384415, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.303767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.384474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.384480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.384483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.384485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:47:25.384487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:25 compute-0 sshd-session[163630]: Connection closed by 192.168.122.30 port 49914
Nov 29 07:47:25 compute-0 sshd-session[163616]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:47:25 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 07:47:25 compute-0 systemd[1]: session-49.scope: Consumed 3min 43.674s CPU time.
Nov 29 07:47:25 compute-0 systemd-logind[782]: Session 49 logged out. Waiting for processes to exit.
Nov 29 07:47:25 compute-0 systemd-logind[782]: Removed session 49.
Nov 29 07:47:26 compute-0 ceph-mon[75237]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:47:27.102 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:47:27.104 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:47:27.104 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:28 compute-0 ceph-mon[75237]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:29 compute-0 podman[223166]: 2025-11-29 07:47:29.926976667 +0000 UTC m=+0.081389525 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 07:47:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:30 compute-0 ceph-mon[75237]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:32 compute-0 ceph-mon[75237]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:32 compute-0 sudo[223187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:32 compute-0 sudo[223187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:32 compute-0 sudo[223187]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:32 compute-0 sshd-session[223185]: Accepted publickey for zuul from 192.168.122.30 port 41734 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:47:32 compute-0 systemd-logind[782]: New session 50 of user zuul.
Nov 29 07:47:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:33 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 29 07:47:33 compute-0 sshd-session[223185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:47:33 compute-0 sudo[223212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:33 compute-0 sudo[223212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:33 compute-0 sudo[223212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:33 compute-0 sudo[223239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:33 compute-0 sudo[223239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:33 compute-0 sudo[223239]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:33 compute-0 sudo[223282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:47:33 compute-0 sudo[223282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:33 compute-0 sudo[223282]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:33 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6a6076af-cfc9-49be-81d1-7f324eb6eac8 does not exist
Nov 29 07:47:33 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 2829958d-f3a4-4fd1-b54e-f4c40401fcb2 does not exist
Nov 29 07:47:33 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b6d701d6-8669-431a-9880-d16fbd7a76b8 does not exist
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:47:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:47:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:47:33 compute-0 sudo[223470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:33 compute-0 sudo[223470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:33 compute-0 sudo[223470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:34 compute-0 sudo[223495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:34 compute-0 sudo[223495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:34 compute-0 sudo[223495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:34 compute-0 sudo[223520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:34 compute-0 sudo[223520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:34 compute-0 sudo[223520]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:34 compute-0 python3.9[223469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:47:34 compute-0 sudo[223545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:47:34 compute-0 sudo[223545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:34 compute-0 ceph-mon[75237]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:47:34 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.497401439 +0000 UTC m=+0.042111576 container create 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:34 compute-0 systemd[1]: Started libpod-conmon-132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd.scope.
Nov 29 07:47:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.47738369 +0000 UTC m=+0.022093817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.591956528 +0000 UTC m=+0.136666715 container init 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.604008962 +0000 UTC m=+0.148719059 container start 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.607900177 +0000 UTC m=+0.152610304 container attach 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:34 compute-0 musing_bassi[223632]: 167 167
Nov 29 07:47:34 compute-0 systemd[1]: libpod-132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd.scope: Deactivated successfully.
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.61169985 +0000 UTC m=+0.156409977 container died 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab7e5f090b7f8f30bcd13eda14a142a234d6f1449b4402dcc7301c3ec10400a-merged.mount: Deactivated successfully.
Nov 29 07:47:34 compute-0 podman[223616]: 2025-11-29 07:47:34.670166675 +0000 UTC m=+0.214876772 container remove 132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:34 compute-0 systemd[1]: libpod-conmon-132a905b432d71e7316892533eec96d1d69f217d54be4f02f3edfd387d233bdd.scope: Deactivated successfully.
Nov 29 07:47:34 compute-0 podman[223679]: 2025-11-29 07:47:34.894756169 +0000 UTC m=+0.093954283 container create 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:34 compute-0 podman[223679]: 2025-11-29 07:47:34.842079009 +0000 UTC m=+0.041277163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:35 compute-0 systemd[1]: Started libpod-conmon-0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25.scope.
Nov 29 07:47:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:35 compute-0 podman[223679]: 2025-11-29 07:47:35.08771969 +0000 UTC m=+0.286917794 container init 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:47:35 compute-0 podman[223679]: 2025-11-29 07:47:35.100430522 +0000 UTC m=+0.299628596 container start 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:47:35 compute-0 podman[223679]: 2025-11-29 07:47:35.104248754 +0000 UTC m=+0.303446828 container attach 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:47:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:35 compute-0 python3.9[223825]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:47:35 compute-0 network[223842]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:47:35 compute-0 network[223843]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:47:35 compute-0 network[223844]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:47:36 compute-0 amazing_fermat[223741]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:47:36 compute-0 amazing_fermat[223741]: --> relative data size: 1.0
Nov 29 07:47:36 compute-0 amazing_fermat[223741]: --> All data devices are unavailable
Nov 29 07:47:36 compute-0 podman[223679]: 2025-11-29 07:47:36.24721527 +0000 UTC m=+1.446413364 container died 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:47:36 compute-0 ceph-mon[75237]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:36 compute-0 systemd[1]: libpod-0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25.scope: Deactivated successfully.
Nov 29 07:47:36 compute-0 systemd[1]: libpod-0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25.scope: Consumed 1.103s CPU time.
Nov 29 07:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-151264450ca313726d838f04b1fa9d027824e8c8759289b9f91f52ddbaa2368b-merged.mount: Deactivated successfully.
Nov 29 07:47:36 compute-0 podman[223679]: 2025-11-29 07:47:36.720568558 +0000 UTC m=+1.919766642 container remove 0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:47:36 compute-0 systemd[1]: libpod-conmon-0afd52950c41732ce9b6bd08c1dfa10258823e1c9e018a091d76850c3fcc8f25.scope: Deactivated successfully.
Nov 29 07:47:36 compute-0 sudo[223545]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:36 compute-0 sudo[223892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:36 compute-0 sudo[223892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:36 compute-0 sudo[223892]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:36 compute-0 sudo[223920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:36 compute-0 sudo[223920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:36 compute-0 sudo[223920]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:36 compute-0 sudo[223949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:36 compute-0 sudo[223949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:36 compute-0 sudo[223949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:37 compute-0 sudo[223977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:47:37 compute-0 sudo[223977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.324214977 +0000 UTC m=+0.035841117 container create c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:37 compute-0 systemd[1]: Started libpod-conmon-c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a.scope.
Nov 29 07:47:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.394274106 +0000 UTC m=+0.105900256 container init c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.402003824 +0000 UTC m=+0.113629964 container start c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.309691256 +0000 UTC m=+0.021317396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:37 compute-0 nostalgic_shamir[224078]: 167 167
Nov 29 07:47:37 compute-0 systemd[1]: libpod-c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a.scope: Deactivated successfully.
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.412564518 +0000 UTC m=+0.124190698 container attach c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.412933148 +0000 UTC m=+0.124559298 container died c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b59c70a04ee117902220617d5dcd4e70c8123b902960354dac3f6e6c46dca901-merged.mount: Deactivated successfully.
Nov 29 07:47:37 compute-0 podman[224057]: 2025-11-29 07:47:37.458180358 +0000 UTC m=+0.169806488 container remove c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_shamir, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:37 compute-0 systemd[1]: libpod-conmon-c93d07e668ff2302d7fe7776123f8cdfe421218e7e4fc4d427b146e20de21c0a.scope: Deactivated successfully.
Nov 29 07:47:37 compute-0 podman[224114]: 2025-11-29 07:47:37.613946336 +0000 UTC m=+0.024429420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:37 compute-0 podman[224114]: 2025-11-29 07:47:37.816150366 +0000 UTC m=+0.226633430 container create b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 07:47:37 compute-0 systemd[1]: Started libpod-conmon-b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a.scope.
Nov 29 07:47:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5356725e3397def7647858d74affa7fb3d78885a782080754a8031a26cd4d7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5356725e3397def7647858d74affa7fb3d78885a782080754a8031a26cd4d7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5356725e3397def7647858d74affa7fb3d78885a782080754a8031a26cd4d7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5356725e3397def7647858d74affa7fb3d78885a782080754a8031a26cd4d7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:38 compute-0 podman[224114]: 2025-11-29 07:47:38.054933162 +0000 UTC m=+0.465416306 container init b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:38 compute-0 podman[224114]: 2025-11-29 07:47:38.067600143 +0000 UTC m=+0.478083237 container start b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:47:38 compute-0 podman[224114]: 2025-11-29 07:47:38.071527829 +0000 UTC m=+0.482010893 container attach b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:47:38 compute-0 podman[224133]: 2025-11-29 07:47:38.078073295 +0000 UTC m=+0.341818814 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:47:38 compute-0 ceph-mon[75237]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:47:38
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 29 07:47:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]: {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     "0": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "devices": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "/dev/loop3"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             ],
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_name": "ceph_lv0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_size": "21470642176",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "name": "ceph_lv0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "tags": {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.crush_device_class": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.encrypted": "0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_id": "0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.vdo": "0"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             },
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "vg_name": "ceph_vg0"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         }
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     ],
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     "1": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "devices": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "/dev/loop4"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             ],
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_name": "ceph_lv1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_size": "21470642176",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "name": "ceph_lv1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "tags": {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.crush_device_class": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.encrypted": "0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_id": "1",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.vdo": "0"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             },
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "vg_name": "ceph_vg1"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         }
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     ],
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     "2": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "devices": [
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "/dev/loop5"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             ],
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_name": "ceph_lv2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_size": "21470642176",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "name": "ceph_lv2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "tags": {
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.crush_device_class": "",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.encrypted": "0",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osd_id": "2",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:                 "ceph.vdo": "0"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             },
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "type": "block",
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:             "vg_name": "ceph_vg2"
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:         }
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]:     ]
Nov 29 07:47:38 compute-0 adoring_pasteur[224160]: }
Nov 29 07:47:38 compute-0 systemd[1]: libpod-b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a.scope: Deactivated successfully.
Nov 29 07:47:38 compute-0 podman[224114]: 2025-11-29 07:47:38.918251349 +0000 UTC m=+1.328734433 container died b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5356725e3397def7647858d74affa7fb3d78885a782080754a8031a26cd4d7d-merged.mount: Deactivated successfully.
Nov 29 07:47:39 compute-0 podman[224114]: 2025-11-29 07:47:39.297949054 +0000 UTC m=+1.708432148 container remove b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:39 compute-0 systemd[1]: libpod-conmon-b1d59dd31a2ecf42db3bdd104742c65b55afd5371a5776beeb5f2765d3f6646a.scope: Deactivated successfully.
Nov 29 07:47:39 compute-0 sudo[223977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:39 compute-0 sudo[224233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:39 compute-0 sudo[224233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:39 compute-0 sudo[224233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:39 compute-0 sudo[224258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:39 compute-0 sudo[224258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:39 compute-0 sudo[224258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:39 compute-0 sudo[224283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:39 compute-0 sudo[224283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:39 compute-0 sudo[224283]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:39 compute-0 ceph-mon[75237]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:39 compute-0 sudo[224308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:47:39 compute-0 sudo[224308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:40 compute-0 sshd-session[224218]: Invalid user old from 103.234.151.178 port 37602
Nov 29 07:47:40 compute-0 podman[224380]: 2025-11-29 07:47:40.109871477 +0000 UTC m=+0.023621058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:40 compute-0 sshd-session[224218]: Received disconnect from 103.234.151.178 port 37602:11: Bye Bye [preauth]
Nov 29 07:47:40 compute-0 sshd-session[224218]: Disconnected from invalid user old 103.234.151.178 port 37602 [preauth]
Nov 29 07:47:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.035486004 +0000 UTC m=+0.949235755 container create a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:47:41 compute-0 sudo[224551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csrgskkrbaifocttclmippytllvlrtpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402460.5730147-47-228575926846658/AnsiballZ_setup.py'
Nov 29 07:47:41 compute-0 sudo[224551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:41 compute-0 systemd[1]: Started libpod-conmon-a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165.scope.
Nov 29 07:47:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.150812322 +0000 UTC m=+1.064561873 container init a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.164291206 +0000 UTC m=+1.078040737 container start a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.16853585 +0000 UTC m=+1.082285411 container attach a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:47:41 compute-0 wonderful_ramanujan[224556]: 167 167
Nov 29 07:47:41 compute-0 systemd[1]: libpod-a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165.scope: Deactivated successfully.
Nov 29 07:47:41 compute-0 conmon[224556]: conmon a4385449e35e8204acd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165.scope/container/memory.events
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.172835306 +0000 UTC m=+1.086584857 container died a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b52dfbe0aa8c39860e1b727933e401e1f8210954ada57746b9740dae1b1e8fbe-merged.mount: Deactivated successfully.
Nov 29 07:47:41 compute-0 podman[224380]: 2025-11-29 07:47:41.217287713 +0000 UTC m=+1.131037254 container remove a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:47:41 compute-0 systemd[1]: libpod-conmon-a4385449e35e8204acd82a4b7b352cc9431756e8e471d7605b154cfab2c7c165.scope: Deactivated successfully.
Nov 29 07:47:41 compute-0 python3.9[224555]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:47:41 compute-0 podman[224580]: 2025-11-29 07:47:41.362414535 +0000 UTC m=+0.027336667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:41 compute-0 podman[224580]: 2025-11-29 07:47:41.495471671 +0000 UTC m=+0.160393773 container create 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:41 compute-0 systemd[1]: Started libpod-conmon-9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e.scope.
Nov 29 07:47:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150a32af64e5d0ffec0c3e99f159b773be87376759f2e6e2c10d34d72f7c0f43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150a32af64e5d0ffec0c3e99f159b773be87376759f2e6e2c10d34d72f7c0f43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150a32af64e5d0ffec0c3e99f159b773be87376759f2e6e2c10d34d72f7c0f43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150a32af64e5d0ffec0c3e99f159b773be87376759f2e6e2c10d34d72f7c0f43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:41 compute-0 sudo[224551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:41 compute-0 podman[224580]: 2025-11-29 07:47:41.863386977 +0000 UTC m=+0.528309179 container init 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:47:41 compute-0 podman[224580]: 2025-11-29 07:47:41.87311254 +0000 UTC m=+0.538034652 container start 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:47:41 compute-0 podman[224580]: 2025-11-29 07:47:41.878581187 +0000 UTC m=+0.543503309 container attach 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:47:42 compute-0 ceph-mon[75237]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:42 compute-0 sudo[224683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyssnhaqgcvukertrjktdgxlutlomgkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402460.5730147-47-228575926846658/AnsiballZ_dnf.py'
Nov 29 07:47:42 compute-0 sudo[224683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:42 compute-0 python3.9[224685]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:47:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:47:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:43 compute-0 nifty_shaw[224605]: {
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_id": 2,
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "type": "bluestore"
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     },
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_id": 0,
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "type": "bluestore"
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     },
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_id": 1,
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:         "type": "bluestore"
Nov 29 07:47:43 compute-0 nifty_shaw[224605]:     }
Nov 29 07:47:43 compute-0 nifty_shaw[224605]: }
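Annotation: the JSON block printed by the short-lived nifty_shaw container above is a device inventory of this host's three LVM-backed BlueStore OSDs. Its shape (one object per osd_uuid carrying ceph_fsid, device, osd_id and type) looks like the output of 'ceph-volume raw list --format json', which cephadm typically runs in a throwaway container when refreshing host device data (note the mgr storing mgr/cephadm/host.compute-0.devices.0 a moment later). A minimal parsing sketch in Python, assuming the JSON has been captured to a hypothetical file osds.json:

    import json

    # Hypothetical capture file; in the log the JSON only went to the journal.
    with open("osds.json") as fh:
        osds = json.load(fh)

    # Top-level keys are OSD UUIDs; each value names the backing device.
    for meta in sorted(osds.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']} -> {meta['device']} ({meta['type']})")

    # Expected output for the inventory above:
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
    # osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1 (bluestore)
    # osd.2 -> /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)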
Nov 29 07:47:43 compute-0 systemd[1]: libpod-9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e.scope: Deactivated successfully.
Nov 29 07:47:43 compute-0 podman[224580]: 2025-11-29 07:47:43.081697613 +0000 UTC m=+1.746619755 container died 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:47:43 compute-0 systemd[1]: libpod-9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e.scope: Consumed 1.214s CPU time.
Nov 29 07:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-150a32af64e5d0ffec0c3e99f159b773be87376759f2e6e2c10d34d72f7c0f43-merged.mount: Deactivated successfully.
Nov 29 07:47:43 compute-0 podman[224580]: 2025-11-29 07:47:43.217418571 +0000 UTC m=+1.882340683 container remove 9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:47:43 compute-0 systemd[1]: libpod-conmon-9110ea6dfec9cf6f17dd789bbb23892cd66b71d4e5fd89670f9489cd0bf65e6e.scope: Deactivated successfully.
Nov 29 07:47:43 compute-0 sudo[224308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:47:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:47:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4f528ae0-e3d1-40f5-9d34-96a8092939e0 does not exist
Nov 29 07:47:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d541fd1d-4397-45c8-a26e-6284c2007344 does not exist
Nov 29 07:47:43 compute-0 sudo[224729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:43 compute-0 sudo[224729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:43 compute-0 sudo[224729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:43 compute-0 sudo[224754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:47:43 compute-0 sudo[224754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:43 compute-0 sudo[224754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:44 compute-0 ceph-mon[75237]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:47:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:46 compute-0 ceph-mon[75237]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:47 compute-0 ceph-mon[75237]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:47 compute-0 sudo[224683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:48 compute-0 sudo[224928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjycvqrqzuvdffivsgwkjvletklirtkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402468.2974746-59-10246637557512/AnsiballZ_stat.py'
Nov 29 07:47:48 compute-0 sudo[224928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:48 compute-0 python3.9[224930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:47:48 compute-0 sudo[224928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:50 compute-0 sudo[225080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsjliqxaxxcmeilcvjutfyqvcrxpdogr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402469.2132156-69-68124373043723/AnsiballZ_command.py'
Nov 29 07:47:50 compute-0 sudo[225080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:50 compute-0 ceph-mon[75237]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:50 compute-0 python3.9[225082]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:50 compute-0 sudo[225080]: pam_unix(sudo:session): session closed for user root
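Annotation: the restorecon run above is a dry-run audit rather than a relabel; the task only checks whether /etc/iscsi and /var/lib/iscsi carry the expected SELinux contexts:

    # -n: report only (change no labels), -v: verbose, -r: recursive
    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi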
Nov 29 07:47:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:51 compute-0 sudo[225235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixsuvmcoclrnebofesybixblbthgbtnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402470.73525-79-56660072980114/AnsiballZ_stat.py'
Nov 29 07:47:51 compute-0 sudo[225235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:51 compute-0 python3.9[225237]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:47:51 compute-0 sudo[225235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:51 compute-0 sshd-session[225227]: Received disconnect from 20.185.243.158 port 52514:11: Bye Bye [preauth]
Nov 29 07:47:51 compute-0 sshd-session[225227]: Disconnected from authenticating user root 20.185.243.158 port 52514 [preauth]
Nov 29 07:47:51 compute-0 sudo[225387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frprqppzbqqpoyadvltorenafblxklzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402471.4150152-87-38912000870982/AnsiballZ_command.py'
Nov 29 07:47:51 compute-0 sudo[225387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:51 compute-0 python3.9[225389]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:47:51 compute-0 sudo[225387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:52 compute-0 ceph-mon[75237]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:52 compute-0 sudo[225542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzhpuhhiuwsgbbbhuqhljvqzuirznmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402472.1796877-95-83883605039797/AnsiballZ_stat.py'
Nov 29 07:47:52 compute-0 sudo[225542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:52 compute-0 python3.9[225544]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:47:52 compute-0 sudo[225542]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:53 compute-0 sudo[225665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxngmksncsiatpilrgugkcmlmyfwhqkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402472.1796877-95-83883605039797/AnsiballZ_copy.py'
Nov 29 07:47:53 compute-0 sudo[225665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:53 compute-0 sshd-session[225461]: Invalid user rstudio from 114.34.106.146 port 54256
Nov 29 07:47:53 compute-0 python3.9[225667]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402472.1796877-95-83883605039797/.source.iscsi _original_basename=.zs5_7ma0 follow=False checksum=08b223526ef0f5be0a6e57845524ab9171ab941e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:53 compute-0 sudo[225665]: pam_unix(sudo:session): session closed for user root
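Annotation: the two tasks above generate and persist the iSCSI initiator name: /usr/sbin/iscsi-iname prints a single randomly-suffixed IQN, and the copy task writes it to /etc/iscsi/initiatorname.iscsi with mode 0644. The resulting file holds one line of the form sketched below (on EL9 iscsi-iname typically uses the iqn.1994-05.com.redhat prefix; the concrete IQN never appears in the log, so the suffix is shown as a placeholder):

    InitiatorName=iqn.1994-05.com.redhat:<random suffix from iscsi-iname>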
Nov 29 07:47:53 compute-0 sshd-session[225461]: Received disconnect from 114.34.106.146 port 54256:11: Bye Bye [preauth]
Nov 29 07:47:53 compute-0 sshd-session[225461]: Disconnected from invalid user rstudio 114.34.106.146 port 54256 [preauth]
Nov 29 07:47:54 compute-0 sudo[225817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqphuvrgewetimgvjzzfqboboagsmpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402473.7074215-110-114668639028678/AnsiballZ_file.py'
Nov 29 07:47:54 compute-0 sudo[225817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:54 compute-0 ceph-mon[75237]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:54 compute-0 python3.9[225819]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:54 compute-0 sudo[225817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:55 compute-0 sudo[225969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvxhxcdnjulfmysraxjvnqifnocatfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402474.576945-118-260975654915104/AnsiballZ_lineinfile.py'
Nov 29 07:47:55 compute-0 sudo[225969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:47:55 compute-0 python3.9[225971]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:47:55 compute-0 sudo[225969]: pam_unix(sudo:session): session closed for user root
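Annotation: the lineinfile task above pins the initiator's CHAP digest preference. Assuming a stock /etc/iscsi/iscsid.conf with only the commented chap_algs template entry, the task lands the managed line right after it:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5

The list is ordered by decreasing preference, so the initiator offers SHA3-256 first and keeps MD5 only as a legacy fallback.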
Nov 29 07:47:55 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:47:56 compute-0 sudo[226122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spobuwwmvmwgspfpbizkdyqscqkaijyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402475.5777066-127-76747971259202/AnsiballZ_systemd_service.py'
Nov 29 07:47:56 compute-0 sudo[226122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:56 compute-0 ceph-mon[75237]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:56 compute-0 python3.9[226124]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:47:56 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 07:47:56 compute-0 sudo[226122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:57 compute-0 sudo[226278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okcfrxfmnxxaichhjllmggbalusxxcjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402476.737273-135-71922695921991/AnsiballZ_systemd_service.py'
Nov 29 07:47:57 compute-0 sudo[226278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:57 compute-0 python3.9[226280]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:47:58 compute-0 ceph-mon[75237]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:58 compute-0 systemd[1]: Reloading.
Nov 29 07:47:58 compute-0 systemd-sysv-generator[226314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:47:58 compute-0 systemd-rc-local-generator[226310]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:47:58 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:47:58 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 07:47:58 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 07:47:58 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 07:47:58 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 07:47:58 compute-0 sudo[226278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:58 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
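Annotation: the ordering here is deliberate. The one-time configuration unit for iscsi.service is skipped because its ConditionPathExists=!/etc/iscsi/initiatorname.iscsi only fires when no initiator name exists yet, and the copy task at 07:47:53 had already written that file; iscsid then starts cleanly through its socket unit enabled just above.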
Nov 29 07:47:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:47:59 compute-0 sudo[226478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpznmdfvvctucmxdcxfmlsitqxvlhyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402479.174611-146-208871303236581/AnsiballZ_service_facts.py'
Nov 29 07:47:59 compute-0 sudo[226478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:47:59 compute-0 python3.9[226480]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:47:59 compute-0 network[226497]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:47:59 compute-0 network[226498]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:47:59 compute-0 network[226499]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:48:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:00 compute-0 ceph-mon[75237]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:00 compute-0 podman[226506]: 2025-11-29 07:48:00.804043648 +0000 UTC m=+0.083025199 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 07:48:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:01 compute-0 ceph-mon[75237]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:03 compute-0 sshd-session[226549]: Received disconnect from 103.236.140.19 port 54744:11: Bye Bye [preauth]
Nov 29 07:48:03 compute-0 sshd-session[226549]: Disconnected from authenticating user root 103.236.140.19 port 54744 [preauth]
Nov 29 07:48:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:03 compute-0 sudo[226478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:03 compute-0 sudo[226790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlyagyraqhobfhugggxklpysmbvyjif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402483.5043488-156-192252577515755/AnsiballZ_file.py'
Nov 29 07:48:03 compute-0 sudo[226790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:04 compute-0 python3.9[226792]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:48:04 compute-0 ceph-mon[75237]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:04 compute-0 sudo[226790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:04 compute-0 sudo[226942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpdrmxaubkytrgnifbrvmgjmfpvnlpql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402484.2697482-164-177828647414392/AnsiballZ_modprobe.py'
Nov 29 07:48:04 compute-0 sudo[226942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:04 compute-0 python3.9[226944]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 07:48:04 compute-0 sudo[226942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:05 compute-0 sudo[227098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofvsltfvcywfkhejzphgmghmanesjkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402485.1363366-172-203120512268644/AnsiballZ_stat.py'
Nov 29 07:48:05 compute-0 sudo[227098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:05 compute-0 python3.9[227100]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:05 compute-0 sudo[227098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 ceph-mon[75237]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:06 compute-0 sudo[227221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxacairkbmvcgagmaesigoteezgsgmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402485.1363366-172-203120512268644/AnsiballZ_copy.py'
Nov 29 07:48:06 compute-0 sudo[227221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:07 compute-0 python3.9[227223]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402485.1363366-172-203120512268644/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:07 compute-0 sudo[227221]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:07 compute-0 sudo[227373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctfgooyjknfvemloodfxquxsvxsfpuqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402487.3517637-188-238246846014515/AnsiballZ_lineinfile.py'
Nov 29 07:48:07 compute-0 sudo[227373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:07 compute-0 python3.9[227375]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:07 compute-0 sudo[227373]: pam_unix(sudo:session): session closed for user root
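Annotation: the three tasks above arrange for dm-multipath to load both now and at boot: community.general.modprobe inserts it into the running kernel, the rendered template lands in /etc/modules-load.d/dm-multipath.conf for systemd-modules-load (which is restarted shortly after this), and the lineinfile additionally appends it to the Debian-style /etc/modules, harmless but effectively unused on EL9. The rendered drop-in is not shown in the log; from the template name a one-line file is the likely result:

    # /etc/modules-load.d/dm-multipath.conf (assumed rendered content)
    dm-multipath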
Nov 29 07:48:08 compute-0 ceph-mon[75237]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:08 compute-0 podman[227452]: 2025-11-29 07:48:08.955345272 +0000 UTC m=+0.114168818 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:09 compute-0 sudo[227552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dghawziuaxhyeqjangloawktmapeonri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402488.625733-196-128843665144459/AnsiballZ_systemd.py'
Nov 29 07:48:09 compute-0 sudo[227552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:09 compute-0 python3.9[227554]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:48:09 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:48:09 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:48:09 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:48:09 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:48:09 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:48:09 compute-0 sudo[227552]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:10 compute-0 ceph-mon[75237]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:10 compute-0 sudo[227708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sarzrxffpbhlckrbvmpydqicipjnsscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402489.9293802-204-184561339616403/AnsiballZ_file.py'
Nov 29 07:48:10 compute-0 sudo[227708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:10 compute-0 python3.9[227710]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:10 compute-0 sudo[227708]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:11 compute-0 sudo[227860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiwhrqiazgclcaurtsujqygudexpejqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402490.802975-213-278239412653438/AnsiballZ_stat.py'
Nov 29 07:48:11 compute-0 sudo[227860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:11 compute-0 python3.9[227862]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:48:11 compute-0 sudo[227860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:12 compute-0 sudo[228012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wicgbmlskiaxtbxqciyllqsgpfzvxfvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402491.6826458-222-178360047727303/AnsiballZ_stat.py'
Nov 29 07:48:12 compute-0 sudo[228012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:12 compute-0 ceph-mon[75237]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:12 compute-0 python3.9[228014]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:48:12 compute-0 sudo[228012]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:12 compute-0 sudo[228164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fksbhemwkhnmcaatixyjycsinirwlhgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402492.4575465-230-130634595482739/AnsiballZ_stat.py'
Nov 29 07:48:12 compute-0 sudo[228164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:12 compute-0 python3.9[228166]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:13 compute-0 sudo[228164]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:13 compute-0 sudo[228287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpcaxfcegqbuviagcziffwgorscmvrls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402492.4575465-230-130634595482739/AnsiballZ_copy.py'
Nov 29 07:48:13 compute-0 sudo[228287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:13 compute-0 python3.9[228289]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402492.4575465-230-130634595482739/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:13 compute-0 sudo[228287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:14 compute-0 ceph-mon[75237]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:14 compute-0 sudo[228439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtgsttjtudujpxmpkvzghleektkgune ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402493.8672066-245-64394572936621/AnsiballZ_command.py'
Nov 29 07:48:14 compute-0 sudo[228439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:14 compute-0 python3.9[228441]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:48:14 compute-0 sudo[228439]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:14 compute-0 sudo[228592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpiejhmszeqzehdmjmpvozckvioxejcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402494.6164925-253-32074414639755/AnsiballZ_lineinfile.py'
Nov 29 07:48:14 compute-0 sudo[228592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:15 compute-0 python3.9[228594]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:15 compute-0 sudo[228592]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:15 compute-0 sudo[228744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkwjpldfujlbkfqvinplkztpednhgwed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402495.3968728-261-123873553254619/AnsiballZ_replace.py'
Nov 29 07:48:15 compute-0 sudo[228744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:16 compute-0 python3.9[228746]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:16 compute-0 ceph-mon[75237]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:16 compute-0 sudo[228744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:16 compute-0 sudo[228896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcoffebyvjijeibeugrossjizwsqpmse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402496.2751052-269-37108651866704/AnsiballZ_replace.py'
Nov 29 07:48:16 compute-0 sudo[228896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:16 compute-0 python3.9[228898]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:16 compute-0 sudo[228896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:17 compute-0 sudo[229049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-behpqpkxontpvujzawypgrylytytntma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402497.0109127-278-45546900573996/AnsiballZ_lineinfile.py'
Nov 29 07:48:17 compute-0 sudo[229049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:17 compute-0 python3.9[229051]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:17 compute-0 sudo[229049]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:17 compute-0 sudo[229201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btjruefrmrrmjsxkkqxmzpmlwnfeudaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402497.6558251-278-147162250341581/AnsiballZ_lineinfile.py'
Nov 29 07:48:17 compute-0 sudo[229201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:18 compute-0 python3.9[229203]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:18 compute-0 sudo[229201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:18 compute-0 ceph-mon[75237]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:19 compute-0 sudo[229353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjcpezhxgozabqchrlboeudqrdyxjrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402498.260428-278-215505004472618/AnsiballZ_lineinfile.py'
Nov 29 07:48:19 compute-0 sudo[229353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:19 compute-0 python3.9[229355]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:19 compute-0 sudo[229353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:19 compute-0 ceph-mon[75237]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:19 compute-0 sudo[229505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srcymqinbwwkzqprpgcxoecnxvgpkmqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402499.487721-278-152017609196634/AnsiballZ_lineinfile.py'
Nov 29 07:48:19 compute-0 sudo[229505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:19 compute-0 python3.9[229507]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:20 compute-0 sudo[229505]: pam_unix(sudo:session): session closed for user root
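Annotation: taken together, the grep/lineinfile/replace chain above normalizes /etc/multipath.conf. It guarantees a blacklist section exists (and immediately closes a freshly appended one), strips any blanket devnode ".*" blacklist that would hide every device, and pins four defaults, each inserted directly after the defaults line with firstmatch, so the last task run sits first. Under the assumption that the copied base file had a stock defaults section and no custom blacklist, the managed portion converges to:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            # (remaining stock defaults unchanged)
    }
    blacklist {
    }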
Nov 29 07:48:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:21 compute-0 sudo[229657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibsarylsgjcvyoxjxmrlpfvonfhdluwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402500.2065198-307-8819483435010/AnsiballZ_stat.py'
Nov 29 07:48:21 compute-0 sudo[229657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:21 compute-0 sshd-session[215640]: error: kex_exchange_identification: read: Connection timed out
Nov 29 07:48:21 compute-0 sshd-session[215640]: banner exchange: Connection from 45.78.219.195 port 48594: Connection timed out
Nov 29 07:48:21 compute-0 python3.9[229659]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:48:21 compute-0 sudo[229657]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:21 compute-0 sudo[229811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeecktihebtvjsepzwcdaujcievlvmzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402501.5551627-315-174825016423265/AnsiballZ_file.py'
Nov 29 07:48:21 compute-0 sudo[229811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:22 compute-0 ceph-mon[75237]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:22 compute-0 python3.9[229813]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:22 compute-0 sudo[229811]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:22 compute-0 sudo[229963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpuokvcvpwcvcdsoghrhxkgfydksrapp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402502.433731-324-276265789728183/AnsiballZ_file.py'
Nov 29 07:48:22 compute-0 sudo[229963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:22 compute-0 python3.9[229965]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:23 compute-0 sudo[229963]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:23 compute-0 sudo[230115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjygwlkhberwmxxcsuygefahbzxfkvsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402503.2208905-332-243351243021255/AnsiballZ_stat.py'
Nov 29 07:48:23 compute-0 sudo[230115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:23 compute-0 python3.9[230117]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:23 compute-0 sudo[230115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:24 compute-0 ceph-mon[75237]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:24 compute-0 sudo[230193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewroigbkjnimvtrckwwobjliyajrclw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402503.2208905-332-243351243021255/AnsiballZ_file.py'
Nov 29 07:48:24 compute-0 sudo[230193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:24 compute-0 python3.9[230195]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:24 compute-0 sudo[230193]: pam_unix(sudo:session): session closed for user root
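The stat/file pairs around edpm-container-shutdown are Ansible's copy idempotency at work: the stat task computes a sha1 of the remote file (get_checksum=True, checksum_algorithm=sha1) and nothing is rewritten if it matches the source. A sketch of that checksum, with the path taken from the log:

    import hashlib

    def sha1sum(path: str) -> str:
        # chunked read, same digest the stat task reports as "checksum"
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1sum("/var/local/libexec/edpm-container-shutdown"))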
Nov 29 07:48:24 compute-0 sudo[230345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdpbhpxanxdmwaervrmszhaevwjstzxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402504.5254638-332-243733211226645/AnsiballZ_stat.py'
Nov 29 07:48:24 compute-0 sudo[230345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:25 compute-0 python3.9[230347]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:25 compute-0 sudo[230345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:25 compute-0 sudo[230423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjmqfyqunyxeyhybpnnjfujzvxtqvwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402504.5254638-332-243733211226645/AnsiballZ_file.py'
Nov 29 07:48:25 compute-0 sudo[230423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:25 compute-0 python3.9[230425]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:25 compute-0 sudo[230423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:25 compute-0 ceph-mon[75237]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:26 compute-0 sudo[230575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ragyrwkzssempvnysmqenrddqduymeuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402505.7423575-355-226020639527919/AnsiballZ_file.py'
Nov 29 07:48:26 compute-0 sudo[230575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:26 compute-0 python3.9[230577]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:26 compute-0 sudo[230575]: pam_unix(sudo:session): session closed for user root
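Note the mode=420 in the invocation above, where neighbouring tasks log mode=0644: 420 is simply the decimal form of octal 0644, the usual result of an unquoted octal literal passing through YAML as an integer. The permission bits set on /etc/systemd/system-preset are identical either way:

    import stat

    assert 420 == 0o644                    # same permission bits
    print(stat.filemode(0o100000 | 420))   # -rw-r--r-- (0o100000 = regular file)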
Nov 29 07:48:26 compute-0 sudo[230727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypyzrggoderljnasmrlniixaidbhbrce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402506.550062-363-218091344338599/AnsiballZ_stat.py'
Nov 29 07:48:26 compute-0 sudo[230727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:27 compute-0 python3.9[230729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:48:27.104 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:48:27.105 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:48:27.106 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:27 compute-0 sudo[230727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:27 compute-0 sudo[230805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsxvswhzrxayioyjlmwqcyhowdpqrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402506.550062-363-218091344338599/AnsiballZ_file.py'
Nov 29 07:48:27 compute-0 sudo[230805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:27 compute-0 python3.9[230807]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:27 compute-0 sudo[230805]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:28 compute-0 sudo[230957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyaeqcxoreckgsutruduahfokprestp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402507.9010017-375-45549141799353/AnsiballZ_stat.py'
Nov 29 07:48:28 compute-0 sudo[230957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:28 compute-0 ceph-mon[75237]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:28 compute-0 python3.9[230959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:28 compute-0 sudo[230957]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:28 compute-0 sudo[231035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwcnluhogijqvblzzlfjxnwlvuflxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402507.9010017-375-45549141799353/AnsiballZ_file.py'
Nov 29 07:48:28 compute-0 sudo[231035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:28 compute-0 python3.9[231037]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:28 compute-0 sudo[231035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:29 compute-0 sudo[231187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muoggisojzrszrgjnfvqqzlzfnrkuilq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402509.1167874-387-82168088750174/AnsiballZ_systemd.py'
Nov 29 07:48:29 compute-0 sudo[231187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:29 compute-0 ceph-mon[75237]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:29 compute-0 python3.9[231189]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:48:29 compute-0 systemd[1]: Reloading.
Nov 29 07:48:29 compute-0 systemd-rc-local-generator[231215]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:48:29 compute-0 systemd-sysv-generator[231223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:48:30 compute-0 sudo[231187]: pam_unix(sudo:session): session closed for user root
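The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is what triggers the "Reloading." line and the generator messages that follow it. Functionally it reduces to three systemctl invocations — a sketch, not the module's actual code path:

    import subprocess

    unit = "edpm-container-shutdown.service"   # unit name from the log
    for cmd in (["systemctl", "daemon-reload"],    # daemon_reload=True
                ["systemctl", "enable", unit],     # enabled=True
                ["systemctl", "start", unit]):     # state=started
        subprocess.run(cmd, check=True)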
Nov 29 07:48:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:31 compute-0 sudo[231389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgppwoltccmlnzemakwenrbsacwkgrjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402511.2339118-395-102448586716520/AnsiballZ_stat.py'
Nov 29 07:48:31 compute-0 podman[231350]: 2025-11-29 07:48:31.548768153 +0000 UTC m=+0.069252810 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
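The container health_status=healthy event above is podman's scheduled healthcheck firing for ovn_metadata_agent; on systemd hosts these runs are driven by transient timer units and boil down to `podman healthcheck run <name>`, where exit status 0 means healthy. A hedged sketch of the same check:

    import subprocess

    def is_healthy(name: str) -> bool:
        # exit 0 == healthy, matching the health_status field in the event
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(is_healthy("ovn_metadata_agent"))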
Nov 29 07:48:31 compute-0 sudo[231389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:31 compute-0 python3.9[231397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:31 compute-0 sudo[231389]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:32 compute-0 sudo[231474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llqxpkdxhauyutlbtxjnidtvoppxtthr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402511.2339118-395-102448586716520/AnsiballZ_file.py'
Nov 29 07:48:32 compute-0 sudo[231474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:32 compute-0 ceph-mon[75237]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:32 compute-0 python3.9[231476]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:32 compute-0 sudo[231474]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:33 compute-0 sudo[231626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljpfwchinkzrnjynhqlatrbnxrmtjsqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402512.4238784-407-18017385875517/AnsiballZ_stat.py'
Nov 29 07:48:33 compute-0 sudo[231626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:33 compute-0 python3.9[231628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:33 compute-0 sudo[231626]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:33 compute-0 sudo[231704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbqsuwkopimufomswlapyleovnhdmtoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402512.4238784-407-18017385875517/AnsiballZ_file.py'
Nov 29 07:48:33 compute-0 sudo[231704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:33 compute-0 python3.9[231706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:33 compute-0 sudo[231704]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:34 compute-0 ceph-mon[75237]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:34 compute-0 sudo[231856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czqzmuyxlhjolmrlfczhzqhekbmpkkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402514.073412-419-143072967171416/AnsiballZ_systemd.py'
Nov 29 07:48:34 compute-0 sudo[231856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:34 compute-0 python3.9[231858]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:48:34 compute-0 systemd[1]: Reloading.
Nov 29 07:48:34 compute-0 systemd-rc-local-generator[231881]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:48:34 compute-0 systemd-sysv-generator[231888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:48:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:35 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:48:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:48:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:48:35 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:48:35 compute-0 sudo[231856]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:35 compute-0 sudo[232050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmewyepjlsmmdxaottahjzdjbhttbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402515.5300665-429-249159777129425/AnsiballZ_file.py'
Nov 29 07:48:35 compute-0 sudo[232050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:36 compute-0 python3.9[232052]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:36 compute-0 sudo[232050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:36 compute-0 ceph-mon[75237]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:36 compute-0 sudo[232202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmmgsyegladzomnebvjaidpuhkmwjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402516.2651005-437-99931746211369/AnsiballZ_stat.py'
Nov 29 07:48:36 compute-0 sudo[232202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:36 compute-0 python3.9[232204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:36 compute-0 sudo[232202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:37 compute-0 sudo[232325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwofsaebpvjffanujmxcifgbsxyehmdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402516.2651005-437-99931746211369/AnsiballZ_copy.py'
Nov 29 07:48:37 compute-0 sudo[232325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:37 compute-0 python3.9[232327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402516.2651005-437-99931746211369/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:37 compute-0 sudo[232325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:38 compute-0 ceph-mon[75237]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:38 compute-0 sudo[232477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlbnxaleumfjxrmlykhwzjmlfbaloyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402517.9550974-454-64060297659204/AnsiballZ_file.py'
Nov 29 07:48:38 compute-0 sudo[232477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:38 compute-0 python3.9[232479]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:48:38 compute-0 sudo[232477]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:48:38
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Nov 29 07:48:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:48:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:39 compute-0 sudo[232646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjsqwbgmqgvgijefdpwqoycnfwepamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402518.7572176-462-148515709289873/AnsiballZ_stat.py'
Nov 29 07:48:39 compute-0 sudo[232646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:39 compute-0 podman[232603]: 2025-11-29 07:48:39.253303415 +0000 UTC m=+0.178978544 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Nov 29 07:48:39 compute-0 python3.9[232651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:48:39 compute-0 sudo[232646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:39 compute-0 sudo[232779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxfabsubutnnrpuiianymtjfuaxzbep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402518.7572176-462-148515709289873/AnsiballZ_copy.py'
Nov 29 07:48:39 compute-0 sudo[232779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:40 compute-0 python3.9[232781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402518.7572176-462-148515709289873/.source.json _original_basename=.u_wx6rg9 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:40 compute-0 sudo[232779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:40 compute-0 ceph-mon[75237]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:40 compute-0 sudo[232931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiqqmkcxradbfsxqkcgylvjuctcfbwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402520.2961464-477-51763334807289/AnsiballZ_file.py'
Nov 29 07:48:40 compute-0 sudo[232931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:40 compute-0 python3.9[232933]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:40 compute-0 sudo[232931]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:40 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 07:48:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:41 compute-0 sudo[233084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxmvtgypfeftoatcksnqjoamedxurvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402521.122254-485-140534769037104/AnsiballZ_stat.py'
Nov 29 07:48:41 compute-0 sudo[233084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:41 compute-0 sudo[233084]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:42 compute-0 sudo[233207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvijwbvrynkkzlpeuxebswervfguddra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402521.122254-485-140534769037104/AnsiballZ_copy.py'
Nov 29 07:48:42 compute-0 sudo[233207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:42 compute-0 sudo[233207]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:42 compute-0 ceph-mon[75237]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:42 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:48:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:48:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:43 compute-0 sudo[233360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwbemzcqhegheznzzajspuncivymnxzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402522.5881443-502-264892082530798/AnsiballZ_container_config_data.py'
Nov 29 07:48:43 compute-0 sudo[233360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:43 compute-0 python3.9[233362]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 07:48:43 compute-0 sudo[233360]: pam_unix(sudo:session): session closed for user root
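container_config_data reads every file matching config_pattern under config_path. For the multipathd task above that plausibly reduces to the following — paths and pattern from the log, the merge behavior is an assumption:

    import glob
    import json
    import os

    config_path = "/var/lib/edpm-config/container-startup-config/multipathd"
    configs = {}
    for path in sorted(glob.glob(os.path.join(config_path, "*.json"))):
        with open(path) as f:
            # keyed by container name, e.g. "multipathd" from multipathd.json
            configs[os.path.splitext(os.path.basename(path))[0]] = json.load(f)
    print(list(configs))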
Nov 29 07:48:43 compute-0 sudo[233387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:43 compute-0 sudo[233387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:43 compute-0 sudo[233387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:43 compute-0 sudo[233429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:43 compute-0 sudo[233429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:43 compute-0 sudo[233429]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:43 compute-0 sudo[233480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:43 compute-0 sudo[233480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:43 compute-0 sudo[233480]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:43 compute-0 sudo[233514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:48:43 compute-0 sudo[233514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:43 compute-0 sudo[233626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnjcrighftknnxwefmogdmrcuevuvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402523.5101726-511-269776340514868/AnsiballZ_container_config_hash.py'
Nov 29 07:48:43 compute-0 sudo[233626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:44 compute-0 sudo[233514]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:44 compute-0 python3.9[233629]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:48:44 compute-0 sudo[233626]: pam_unix(sudo:session): session closed for user root
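container_config_hash (check_mode=False, config_vol_prefix=/var/lib/config-data) derives per-config digests so containers are restarted when their configuration changes — compare the 64-hex EDPM_CONFIG_HASH in the ovn_metadata_agent event earlier in the log. The exact algorithm is not shown here; a sketch of the idea only:

    import hashlib
    import os

    def config_hash(root: str) -> str:
        # assumption: one digest over the whole config tree, stable ordering
        h = hashlib.sha256()
        for dirpath, _, files in sorted(os.walk(root)):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    print(config_hash("/var/lib/config-data/ansible-generated"))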
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e5a2cc53-d9c2-4ec8-b4bd-281b12a2a9f9 does not exist
Nov 29 07:48:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev fb8bdf79-3797-406d-bcf7-12b25cb2b326 does not exist
Nov 29 07:48:44 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 72119ba3-55f0-415f-adad-27d2334de19c does not exist
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:48:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:44 compute-0 sudo[233669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:44 compute-0 sudo[233669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:44 compute-0 sudo[233669]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:44 compute-0 sudo[233695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:44 compute-0 sudo[233695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:44 compute-0 sudo[233695]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:44 compute-0 sudo[233733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:44 compute-0 sudo[233733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:44 compute-0 sudo[233733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:44 compute-0 ceph-mon[75237]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:48:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:44 compute-0 sudo[233774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:48:44 compute-0 sudo[233774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:44 compute-0 sudo[233928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ergquqpyqfdbgtvlyuzkbgiyjgldmfyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402524.3772268-520-194187857733032/AnsiballZ_podman_container_info.py'
Nov 29 07:48:44 compute-0 sudo[233928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:44 compute-0 podman[233937]: 2025-11-29 07:48:44.933014277 +0000 UTC m=+0.089093678 container create 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:44 compute-0 podman[233937]: 2025-11-29 07:48:44.867428138 +0000 UTC m=+0.023507549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:44 compute-0 python3.9[233933]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
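podman_container_info with name=None gathers inspect data for every container on the host, roughly equivalent to inspecting everything `podman ps -aq` reports — a sketch, not the module's implementation:

    import json
    import subprocess

    ids = subprocess.run(["podman", "ps", "-aq"], check=True,
                         capture_output=True, text=True).stdout.split()
    info = []
    if ids:
        raw = subprocess.run(["podman", "container", "inspect", *ids],
                             check=True, capture_output=True, text=True).stdout
        info = json.loads(raw)
    print([c["Name"] for c in info])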
Nov 29 07:48:44 compute-0 systemd[1]: Started libpod-conmon-707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a.scope.
Nov 29 07:48:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:45 compute-0 podman[233937]: 2025-11-29 07:48:45.277414179 +0000 UTC m=+0.433493580 container init 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:48:45 compute-0 podman[233937]: 2025-11-29 07:48:45.284222154 +0000 UTC m=+0.440301525 container start 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:45 compute-0 podman[233937]: 2025-11-29 07:48:45.288266053 +0000 UTC m=+0.444345424 container attach 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:48:45 compute-0 vigorous_matsumoto[233955]: 167 167
Nov 29 07:48:45 compute-0 systemd[1]: libpod-707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a.scope: Deactivated successfully.
Nov 29 07:48:45 compute-0 podman[233937]: 2025-11-29 07:48:45.297351509 +0000 UTC m=+0.453430890 container died 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b90071d875dc029a041318259a70c2ff0de5ddd90ef97cce5e3d4506c6094f5-merged.mount: Deactivated successfully.
Nov 29 07:48:45 compute-0 podman[233937]: 2025-11-29 07:48:45.338134516 +0000 UTC m=+0.494213887 container remove 707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:48:45 compute-0 systemd[1]: libpod-conmon-707ec49292975338922143e027a027e7eace4e7ada6c88435408610fa08c288a.scope: Deactivated successfully.
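The create→init→start→attach→died→remove burst for vigorous_matsumoto above is the journal's view of one short-lived `podman run --rm` container, and the lone "167 167" on its stdout matches ceph's in-image uid/gid — consistent with cephadm probing ownership before deploying OSDs. A hypothetical reproduction (image digest from the log; the probed path is an assumption):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # "167 167" — ceph's uid/gid in the image (path assumed)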
Nov 29 07:48:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:45 compute-0 podman[234015]: 2025-11-29 07:48:45.490614882 +0000 UTC m=+0.026803338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:45 compute-0 podman[234015]: 2025-11-29 07:48:45.651312771 +0000 UTC m=+0.187501247 container create 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:48:45 compute-0 ceph-mon[75237]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:45 compute-0 systemd[1]: Started libpod-conmon-375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e.scope.
Nov 29 07:48:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:45 compute-0 podman[234015]: 2025-11-29 07:48:45.879379297 +0000 UTC m=+0.415567763 container init 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:48:45 compute-0 podman[234015]: 2025-11-29 07:48:45.886447399 +0000 UTC m=+0.422635835 container start 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:45 compute-0 podman[234015]: 2025-11-29 07:48:45.890573081 +0000 UTC m=+0.426761517 container attach 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:48:46 compute-0 sudo[233928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:46 compute-0 adoring_grothendieck[234055]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:48:46 compute-0 adoring_grothendieck[234055]: --> relative data size: 1.0
Nov 29 07:48:46 compute-0 adoring_grothendieck[234055]: --> All data devices are unavailable
Nov 29 07:48:46 compute-0 systemd[1]: libpod-375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e.scope: Deactivated successfully.
Nov 29 07:48:46 compute-0 systemd[1]: libpod-375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e.scope: Consumed 1.061s CPU time.
Nov 29 07:48:46 compute-0 podman[234015]: 2025-11-29 07:48:46.992559862 +0000 UTC m=+1.528748378 container died 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2f6bf76ed1e52ad78b3346a3636649198d584889eec88927d19cc2b1ea1da16-merged.mount: Deactivated successfully.
Nov 29 07:48:47 compute-0 sudo[234346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyknjmvuhsdjftgvjlxwvtaswlzyqhfg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402526.9658115-533-216031490649650/AnsiballZ_edpm_container_manage.py'
Nov 29 07:48:47 compute-0 sudo[234346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:47 compute-0 python3[234348]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:48:47 compute-0 podman[234015]: 2025-11-29 07:48:47.956741166 +0000 UTC m=+2.492929602 container remove 375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:47 compute-0 systemd[1]: libpod-conmon-375f521d0e42fae1986e2ec6ec615b00180949e135b68598ffca3e537fc0185e.scope: Deactivated successfully.
Nov 29 07:48:48 compute-0 sudo[233774]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:48 compute-0 sudo[234374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:48 compute-0 sudo[234374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:48 compute-0 sudo[234374]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:48 compute-0 sudo[234401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:48 compute-0 sudo[234401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:48 compute-0 sudo[234401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:48 compute-0 sudo[234426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:48 compute-0 sudo[234426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:48 compute-0 sudo[234426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:48 compute-0 sudo[234451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:48:48 compute-0 sudo[234451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:48 compute-0 ceph-mon[75237]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:48 compute-0 podman[234516]: 2025-11-29 07:48:48.66072638 +0000 UTC m=+0.039951424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:49 compute-0 podman[234516]: 2025-11-29 07:48:49.033884643 +0000 UTC m=+0.413109597 container create 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:48:49 compute-0 systemd[1]: Started libpod-conmon-5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275.scope.
Nov 29 07:48:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:49 compute-0 podman[234516]: 2025-11-29 07:48:49.35451866 +0000 UTC m=+0.733743654 container init 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:49 compute-0 podman[234516]: 2025-11-29 07:48:49.367270146 +0000 UTC m=+0.746495110 container start 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:48:49 compute-0 systemd[1]: libpod-5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275.scope: Deactivated successfully.
Nov 29 07:48:49 compute-0 stoic_heisenberg[234539]: 167 167
Nov 29 07:48:49 compute-0 conmon[234539]: conmon 5774f949c2afb60d6ff8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275.scope/container/memory.events
Nov 29 07:48:49 compute-0 podman[234516]: 2025-11-29 07:48:49.39985656 +0000 UTC m=+0.779081544 container attach 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:49 compute-0 podman[234516]: 2025-11-29 07:48:49.400193629 +0000 UTC m=+0.779418593 container died 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:50 compute-0 ceph-mon[75237]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dd998bbb964965845ef98cbe3d1df78be7cf81d20dca07160f3550ef311da12-merged.mount: Deactivated successfully.
Nov 29 07:48:50 compute-0 podman[234516]: 2025-11-29 07:48:50.18031955 +0000 UTC m=+1.559544494 container remove 5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:48:50 compute-0 systemd[1]: libpod-conmon-5774f949c2afb60d6ff8c2f4e9d81048a9535a8de3dadfd97388ba293dfd9275.scope: Deactivated successfully.
Nov 29 07:48:50 compute-0 podman[234363]: 2025-11-29 07:48:50.199314295 +0000 UTC m=+2.202701089 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:48:50 compute-0 podman[234607]: 2025-11-29 07:48:50.409584478 +0000 UTC m=+0.046961394 container create 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:48:50 compute-0 podman[234596]: 2025-11-29 07:48:50.421221084 +0000 UTC m=+0.072444296 container create af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Nov 29 07:48:50 compute-0 podman[234596]: 2025-11-29 07:48:50.393843051 +0000 UTC m=+0.045066243 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:48:50 compute-0 python3[234348]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:48:50 compute-0 systemd[1]: Started libpod-conmon-5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089.scope.
Nov 29 07:48:50 compute-0 podman[234607]: 2025-11-29 07:48:50.390333697 +0000 UTC m=+0.027710623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a039369dfe16952643946d854ada89b7d508716bf29d56bd280d6ce5db7aba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a039369dfe16952643946d854ada89b7d508716bf29d56bd280d6ce5db7aba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a039369dfe16952643946d854ada89b7d508716bf29d56bd280d6ce5db7aba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a039369dfe16952643946d854ada89b7d508716bf29d56bd280d6ce5db7aba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:50 compute-0 podman[234607]: 2025-11-29 07:48:50.521816453 +0000 UTC m=+0.159193369 container init 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:48:50 compute-0 podman[234607]: 2025-11-29 07:48:50.539182194 +0000 UTC m=+0.176559150 container start 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:48:50 compute-0 podman[234607]: 2025-11-29 07:48:50.544238761 +0000 UTC m=+0.181615677 container attach 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:50 compute-0 sudo[234346]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:51 compute-0 sudo[234807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogoevtqelnvkeokjiwhmtqhnnsbtdwuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402530.7932584-541-89705282325157/AnsiballZ_stat.py'
Nov 29 07:48:51 compute-0 sudo[234807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:51 compute-0 busy_bohr[234641]: {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     "0": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "devices": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "/dev/loop3"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             ],
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_name": "ceph_lv0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_size": "21470642176",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "name": "ceph_lv0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "tags": {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.crush_device_class": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.encrypted": "0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_id": "0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.vdo": "0"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             },
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "vg_name": "ceph_vg0"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         }
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     ],
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     "1": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "devices": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "/dev/loop4"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             ],
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_name": "ceph_lv1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_size": "21470642176",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "name": "ceph_lv1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "tags": {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.crush_device_class": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.encrypted": "0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_id": "1",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.vdo": "0"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             },
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "vg_name": "ceph_vg1"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         }
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     ],
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     "2": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "devices": [
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "/dev/loop5"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             ],
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_name": "ceph_lv2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_size": "21470642176",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "name": "ceph_lv2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "tags": {
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.crush_device_class": "",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.encrypted": "0",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osd_id": "2",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:                 "ceph.vdo": "0"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             },
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "type": "block",
Nov 29 07:48:51 compute-0 busy_bohr[234641]:             "vg_name": "ceph_vg2"
Nov 29 07:48:51 compute-0 busy_bohr[234641]:         }
Nov 29 07:48:51 compute-0 busy_bohr[234641]:     ]
Nov 29 07:48:51 compute-0 busy_bohr[234641]: }
Nov 29 07:48:51 compute-0 systemd[1]: libpod-5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089.scope: Deactivated successfully.
Nov 29 07:48:51 compute-0 podman[234607]: 2025-11-29 07:48:51.315782259 +0000 UTC m=+0.953159185 container died 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-27a039369dfe16952643946d854ada89b7d508716bf29d56bd280d6ce5db7aba-merged.mount: Deactivated successfully.
Nov 29 07:48:51 compute-0 python3.9[234811]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:48:51 compute-0 podman[234607]: 2025-11-29 07:48:51.376879816 +0000 UTC m=+1.014256732 container remove 5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:48:51 compute-0 sudo[234807]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 systemd[1]: libpod-conmon-5c47da85d8e3d500fa6c73376bfaed4038619287d23cd2b6b65ecc06b88ed089.scope: Deactivated successfully.
Nov 29 07:48:51 compute-0 sudo[234451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 sudo[234847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:51 compute-0 sudo[234847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:51 compute-0 sudo[234847]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 sudo[234878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:51 compute-0 sudo[234878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:51 compute-0 sudo[234878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 sudo[234903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:51 compute-0 sudo[234903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:51 compute-0 sudo[234903]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:51 compute-0 sudo[234956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:48:51 compute-0 sudo[234956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:52 compute-0 ceph-mon[75237]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.166575706 +0000 UTC m=+0.050786958 container create 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:48:52 compute-0 systemd[1]: Started libpod-conmon-27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884.scope.
Nov 29 07:48:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.242892467 +0000 UTC m=+0.127103759 container init 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.150642444 +0000 UTC m=+0.034853716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.251306865 +0000 UTC m=+0.135518117 container start 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.256332041 +0000 UTC m=+0.140543303 container attach 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:52 compute-0 strange_thompson[235084]: 167 167
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.257942345 +0000 UTC m=+0.142153597 container died 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:52 compute-0 systemd[1]: libpod-27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884.scope: Deactivated successfully.
Nov 29 07:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee1e40309c337c1ac6d0a891d7287f64840f4a231448c341fa0913ccacb2c918-merged.mount: Deactivated successfully.
Nov 29 07:48:52 compute-0 podman[235044]: 2025-11-29 07:48:52.299951295 +0000 UTC m=+0.184162557 container remove 27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:48:52 compute-0 systemd[1]: libpod-conmon-27e2d453808180e7294435ffb97a4b0ea75c5157e737917085ad25a17f0b1884.scope: Deactivated successfully.
Nov 29 07:48:52 compute-0 sudo[235153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwlhbkouvnllmnzfftlnjcrsstdglus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402531.685897-550-264092400261168/AnsiballZ_file.py'
Nov 29 07:48:52 compute-0 sudo[235153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:52 compute-0 podman[235161]: 2025-11-29 07:48:52.451471284 +0000 UTC m=+0.046742489 container create 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:48:52 compute-0 systemd[1]: Started libpod-conmon-3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8.scope.
Nov 29 07:48:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:52 compute-0 podman[235161]: 2025-11-29 07:48:52.425271564 +0000 UTC m=+0.020542789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369f3cb82cf43d9936d5a13561a57c7e07f8741219e93b7e8a7ed4b550a1eeba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369f3cb82cf43d9936d5a13561a57c7e07f8741219e93b7e8a7ed4b550a1eeba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369f3cb82cf43d9936d5a13561a57c7e07f8741219e93b7e8a7ed4b550a1eeba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369f3cb82cf43d9936d5a13561a57c7e07f8741219e93b7e8a7ed4b550a1eeba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:52 compute-0 podman[235161]: 2025-11-29 07:48:52.539033659 +0000 UTC m=+0.134304884 container init 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:52 compute-0 podman[235161]: 2025-11-29 07:48:52.549702349 +0000 UTC m=+0.144973594 container start 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:48:52 compute-0 python3.9[235155]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:52 compute-0 podman[235161]: 2025-11-29 07:48:52.553636296 +0000 UTC m=+0.148907521 container attach 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:48:52 compute-0 sudo[235153]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:52 compute-0 sudo[235255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceudyfpuraywqgxaqxjuvzalswmqyrgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402531.685897-550-264092400261168/AnsiballZ_stat.py'
Nov 29 07:48:52 compute-0 sudo[235255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:53 compute-0 python3.9[235257]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:48:53 compute-0 sudo[235255]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]: {
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_id": 2,
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "type": "bluestore"
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     },
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_id": 0,
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "type": "bluestore"
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     },
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_id": 1,
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:         "type": "bluestore"
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]:     }
Nov 29 07:48:53 compute-0 clever_mccarthy[235177]: }
Nov 29 07:48:53 compute-0 systemd[1]: libpod-3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8.scope: Deactivated successfully.
Nov 29 07:48:53 compute-0 systemd[1]: libpod-3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8.scope: Consumed 1.025s CPU time.
Nov 29 07:48:53 compute-0 podman[235161]: 2025-11-29 07:48:53.573194771 +0000 UTC m=+1.168465976 container died 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:53 compute-0 sudo[235437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxtifcgrgnyabzflptpriqqorgxdniw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402533.13841-550-97750913441724/AnsiballZ_copy.py'
Nov 29 07:48:53 compute-0 sudo[235437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-369f3cb82cf43d9936d5a13561a57c7e07f8741219e93b7e8a7ed4b550a1eeba-merged.mount: Deactivated successfully.
Nov 29 07:48:53 compute-0 podman[235161]: 2025-11-29 07:48:53.638679757 +0000 UTC m=+1.233950962 container remove 3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:53 compute-0 systemd[1]: libpod-conmon-3726ddbc3edec63705b8a0a2f377cb67179396c6e007b1e6d334af759cc019a8.scope: Deactivated successfully.
Nov 29 07:48:53 compute-0 sudo[234956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 502afac5-297f-4259-b538-04c09469b0fc does not exist
Nov 29 07:48:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 23c5f201-0ead-431f-b5cd-242edbf78d56 does not exist
Nov 29 07:48:53 compute-0 sudo[235450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:48:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Cumulative writes: 5764 writes, 24K keys, 5764 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5764 writes, 913 syncs, 6.31 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 176 writes, 275 keys, 176 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 176 writes, 88 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.74              0.00         1    0.739       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.013       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562223181090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.9 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5622231811f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:48:53 compute-0 sudo[235450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:53 compute-0 sudo[235450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:53 compute-0 python3.9[235447]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402533.13841-550-97750913441724/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:48:53 compute-0 sudo[235437]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:53 compute-0 sudo[235475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:48:53 compute-0 sudo[235475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:53 compute-0 sudo[235475]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:54 compute-0 sudo[235573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yamyhyrwnflkpqthbidgbechslggcmlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402533.13841-550-97750913441724/AnsiballZ_systemd.py'
Nov 29 07:48:54 compute-0 sudo[235573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:54 compute-0 python3.9[235575]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:48:54 compute-0 systemd[1]: Reloading.
Nov 29 07:48:54 compute-0 systemd-rc-local-generator[235602]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:48:54 compute-0 systemd-sysv-generator[235607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:48:54 compute-0 ceph-mon[75237]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:48:54 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 07:48:54 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 07:48:54 compute-0 sshd-session[235315]: Invalid user rstudio from 103.234.151.178 port 61418
Nov 29 07:48:54 compute-0 sudo[235573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:55 compute-0 sudo[235686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwuvzwhyphbfnqehppvsnhpeofydvygi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402533.13841-550-97750913441724/AnsiballZ_systemd.py'
Nov 29 07:48:55 compute-0 sudo[235686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:48:55 compute-0 sshd-session[235315]: Received disconnect from 103.234.151.178 port 61418:11: Bye Bye [preauth]
Nov 29 07:48:55 compute-0 sshd-session[235315]: Disconnected from invalid user rstudio 103.234.151.178 port 61418 [preauth]
Nov 29 07:48:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:48:55 compute-0 python3.9[235688]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:48:55 compute-0 systemd[1]: Reloading.
Nov 29 07:48:55 compute-0 systemd-rc-local-generator[235712]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:48:55 compute-0 systemd-sysv-generator[235720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:48:56 compute-0 ceph-mon[75237]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:56 compute-0 systemd[1]: Starting multipathd container...
Nov 29 07:48:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a9cf15db39f777ab247f21174a38e540baacefac0396d97edeca24e3fb29c6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a9cf15db39f777ab247f21174a38e540baacefac0396d97edeca24e3fb29c6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7.
Nov 29 07:48:58 compute-0 podman[235728]: 2025-11-29 07:48:58.605556432 +0000 UTC m=+2.287808007 container init af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:48:58 compute-0 multipathd[235744]: + sudo -E kolla_set_configs
Nov 29 07:48:58 compute-0 podman[235728]: 2025-11-29 07:48:58.642459233 +0000 UTC m=+2.324710778 container start af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 07:48:58 compute-0 sudo[235750]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:48:58 compute-0 sudo[235750]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:48:58 compute-0 sudo[235750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:48:58 compute-0 multipathd[235744]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:48:58 compute-0 multipathd[235744]: INFO:__main__:Validating config file
Nov 29 07:48:58 compute-0 multipathd[235744]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:48:58 compute-0 multipathd[235744]: INFO:__main__:Writing out command to execute
Nov 29 07:48:58 compute-0 sudo[235750]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:58 compute-0 multipathd[235744]: ++ cat /run_command
Nov 29 07:48:58 compute-0 multipathd[235744]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:48:58 compute-0 multipathd[235744]: + ARGS=
Nov 29 07:48:58 compute-0 multipathd[235744]: + sudo kolla_copy_cacerts
Nov 29 07:48:58 compute-0 sudo[235766]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:48:58 compute-0 sudo[235766]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:48:58 compute-0 sudo[235766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:48:58 compute-0 sudo[235766]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:58 compute-0 multipathd[235744]: + [[ ! -n '' ]]
Nov 29 07:48:58 compute-0 multipathd[235744]: + . kolla_extend_start
Nov 29 07:48:58 compute-0 multipathd[235744]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:48:58 compute-0 multipathd[235744]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:48:58 compute-0 multipathd[235744]: + umask 0022
Nov 29 07:48:58 compute-0 multipathd[235744]: + exec /usr/sbin/multipathd -d
Nov 29 07:48:58 compute-0 multipathd[235744]: 4778.003414 | --------start up--------
Nov 29 07:48:58 compute-0 multipathd[235744]: 4778.003445 | read /etc/multipath.conf
Nov 29 07:48:58 compute-0 multipathd[235744]: 4778.012411 | path checkers start up
Nov 29 07:48:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:48:59 compute-0 ceph-mon[75237]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:01 compute-0 podman[235728]: multipathd
Nov 29 07:49:01 compute-0 systemd[1]: Started multipathd container.
Nov 29 07:49:01 compute-0 sudo[235686]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:01 compute-0 podman[235751]: 2025-11-29 07:49:01.294585261 +0000 UTC m=+2.633992267 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 07:49:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:01 compute-0 ceph-mon[75237]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:01 compute-0 ceph-mon[75237]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:01 compute-0 podman[235832]: 2025-11-29 07:49:01.876961538 +0000 UTC m=+0.049892375 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:02 compute-0 python3.9[235954]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:49:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:03 compute-0 sudo[236106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szgdqsndtajfuvaovdhqsxhgtqyedsma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402542.7215781-586-32681786829970/AnsiballZ_command.py'
Nov 29 07:49:03 compute-0 sudo[236106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:03 compute-0 python3.9[236108]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:03 compute-0 sudo[236106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:03 compute-0 ceph-mon[75237]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:04 compute-0 sudo[236272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpcgebfwscinucjdstyedkwgvpfbeel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402543.813823-594-196384057866170/AnsiballZ_systemd.py'
Nov 29 07:49:04 compute-0 sudo[236272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:04 compute-0 python3.9[236274]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:49:04 compute-0 systemd[1]: Stopping multipathd container...
Nov 29 07:49:04 compute-0 multipathd[235744]: 4784.037932 | exit (signal)
Nov 29 07:49:04 compute-0 multipathd[235744]: 4784.038016 | --------shut down-------
Nov 29 07:49:04 compute-0 systemd[1]: libpod-af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7.scope: Deactivated successfully.
Nov 29 07:49:04 compute-0 podman[236278]: 2025-11-29 07:49:04.865791848 +0000 UTC m=+0.108172845 container died af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:05 compute-0 systemd[1]: af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7-6e08028d66508c57.timer: Deactivated successfully.
Nov 29 07:49:05 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7.
Nov 29 07:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7-userdata-shm.mount: Deactivated successfully.
Nov 29 07:49:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a9cf15db39f777ab247f21174a38e540baacefac0396d97edeca24e3fb29c6-merged.mount: Deactivated successfully.
Nov 29 07:49:05 compute-0 podman[236278]: 2025-11-29 07:49:05.473059791 +0000 UTC m=+0.715440748 container cleanup af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 07:49:05 compute-0 podman[236278]: multipathd
Nov 29 07:49:05 compute-0 podman[236310]: multipathd
Nov 29 07:49:05 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 07:49:05 compute-0 systemd[1]: Stopped multipathd container.
Nov 29 07:49:05 compute-0 systemd[1]: Starting multipathd container...
Nov 29 07:49:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a9cf15db39f777ab247f21174a38e540baacefac0396d97edeca24e3fb29c6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76a9cf15db39f777ab247f21174a38e540baacefac0396d97edeca24e3fb29c6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7.
Nov 29 07:49:06 compute-0 sshd-session[236308]: Invalid user ts1 from 114.34.106.146 port 53798
Nov 29 07:49:06 compute-0 podman[236323]: 2025-11-29 07:49:06.441574602 +0000 UTC m=+0.879941980 container init af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 07:49:06 compute-0 multipathd[236338]: + sudo -E kolla_set_configs
Nov 29 07:49:06 compute-0 ceph-mon[75237]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:06 compute-0 podman[236323]: 2025-11-29 07:49:06.474934136 +0000 UTC m=+0.913301494 container start af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:49:06 compute-0 podman[236323]: multipathd
Nov 29 07:49:06 compute-0 sudo[236344]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:49:06 compute-0 systemd[1]: Started multipathd container.
Nov 29 07:49:06 compute-0 sudo[236344]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:49:06 compute-0 sudo[236344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:49:06 compute-0 sudo[236272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:06 compute-0 multipathd[236338]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:49:06 compute-0 multipathd[236338]: INFO:__main__:Validating config file
Nov 29 07:49:06 compute-0 multipathd[236338]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:49:06 compute-0 multipathd[236338]: INFO:__main__:Writing out command to execute
Nov 29 07:49:06 compute-0 sudo[236344]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:06 compute-0 multipathd[236338]: ++ cat /run_command
Nov 29 07:49:06 compute-0 multipathd[236338]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:49:06 compute-0 multipathd[236338]: + ARGS=
Nov 29 07:49:06 compute-0 multipathd[236338]: + sudo kolla_copy_cacerts
Nov 29 07:49:06 compute-0 sudo[236365]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:49:06 compute-0 sudo[236365]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:49:06 compute-0 sudo[236365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:49:06 compute-0 sudo[236365]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:06 compute-0 multipathd[236338]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:49:06 compute-0 podman[236345]: 2025-11-29 07:49:06.584030845 +0000 UTC m=+0.090974118 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 29 07:49:06 compute-0 multipathd[236338]: + [[ ! -n '' ]]
Nov 29 07:49:06 compute-0 multipathd[236338]: + . kolla_extend_start
Nov 29 07:49:06 compute-0 multipathd[236338]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:49:06 compute-0 multipathd[236338]: + umask 0022
Nov 29 07:49:06 compute-0 multipathd[236338]: + exec /usr/sbin/multipathd -d
Nov 29 07:49:06 compute-0 systemd[1]: af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7-30ef967ee809b391.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:49:06 compute-0 systemd[1]: af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7-30ef967ee809b391.service: Failed with result 'exit-code'.
Nov 29 07:49:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:06 compute-0 multipathd[236338]: 4785.816368 | --------start up--------
Nov 29 07:49:06 compute-0 multipathd[236338]: 4785.816389 | read /etc/multipath.conf
Nov 29 07:49:06 compute-0 multipathd[236338]: 4785.822649 | path checkers start up
Nov 29 07:49:06 compute-0 sshd-session[236308]: Received disconnect from 114.34.106.146 port 53798:11: Bye Bye [preauth]
Nov 29 07:49:06 compute-0 sshd-session[236308]: Disconnected from invalid user ts1 114.34.106.146 port 53798 [preauth]
Nov 29 07:49:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:49:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6739 writes, 28K keys, 6739 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6739 writes, 1126 syncs, 5.98 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 277 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558bf094edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:49:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:07 compute-0 sudo[236526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldeweefbxkdotbkdrllcvwashgluvfpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402546.741318-602-3558415115917/AnsiballZ_file.py'
Nov 29 07:49:07 compute-0 sudo[236526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:07 compute-0 python3.9[236528]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:07 compute-0 sudo[236526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:07 compute-0 sudo[236678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvznfawjjnpppqaqmwzhmykhhdslwpmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402547.6371276-614-95293095081985/AnsiballZ_file.py'
Nov 29 07:49:07 compute-0 sudo[236678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:08 compute-0 python3.9[236680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:49:08 compute-0 sudo[236678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:08 compute-0 ceph-mon[75237]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:08 compute-0 sudo[236830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyuhvvjwyaqhhgjnkxqxixcefzawpucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402548.3703477-622-186741372768880/AnsiballZ_modprobe.py'
Nov 29 07:49:08 compute-0 sudo[236830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:08 compute-0 python3.9[236832]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 07:49:08 compute-0 kernel: Key type psk registered
Nov 29 07:49:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:09 compute-0 sudo[236830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:09 compute-0 ceph-mon[75237]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:09 compute-0 sshd-session[236916]: Invalid user superset from 20.185.243.158 port 39440
Nov 29 07:49:09 compute-0 sshd-session[236916]: Received disconnect from 20.185.243.158 port 39440:11: Bye Bye [preauth]
Nov 29 07:49:09 compute-0 sshd-session[236916]: Disconnected from invalid user superset 20.185.243.158 port 39440 [preauth]
Nov 29 07:49:09 compute-0 sudo[237011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvmixexlluetgucvjrsbkzukrxzgfns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402549.2899027-630-249636436362931/AnsiballZ_stat.py'
Nov 29 07:49:09 compute-0 sudo[237011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:09 compute-0 podman[236967]: 2025-11-29 07:49:09.709673377 +0000 UTC m=+0.110083007 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:49:09 compute-0 python3.9[237015]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:49:09 compute-0 sudo[237011]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:10 compute-0 sudo[237141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqpftudpbrgzdzigcpymdrkvdpvvuhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402549.2899027-630-249636436362931/AnsiballZ_copy.py'
Nov 29 07:49:10 compute-0 sudo[237141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:10 compute-0 python3.9[237143]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402549.2899027-630-249636436362931/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:10 compute-0 sudo[237141]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:11 compute-0 sudo[237293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peaxvcehtpmyfzaatexsnloxehqxhmhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402550.822102-646-162207694340314/AnsiballZ_lineinfile.py'
Nov 29 07:49:11 compute-0 sudo[237293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:11 compute-0 python3.9[237295]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:11 compute-0 sudo[237293]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:12 compute-0 sudo[237445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxchsvxlgekqmrdfycsmlbajzhlynjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402551.7114086-654-210977043101919/AnsiballZ_systemd.py'
Nov 29 07:49:12 compute-0 sudo[237445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:12 compute-0 ceph-mon[75237]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:12 compute-0 python3.9[237447]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:49:12 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:49:12 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:49:12 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:49:12 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:49:12 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:49:12 compute-0 sudo[237445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:49:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5869 writes, 24K keys, 5869 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5869 writes, 906 syncs, 6.48 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 216 writes, 330 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f1743090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5571f17431f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:49:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:13 compute-0 sudo[237601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpaxxkrmhmovufltoueyyvqlckwgczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402552.8828106-662-268253754499384/AnsiballZ_dnf.py'
Nov 29 07:49:13 compute-0 sudo[237601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:13 compute-0 python3.9[237603]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:49:14 compute-0 ceph-mon[75237]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:15 compute-0 systemd[1]: Reloading.
Nov 29 07:49:15 compute-0 systemd-sysv-generator[237631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:49:15 compute-0 systemd-rc-local-generator[237628]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:49:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 07:49:15 compute-0 systemd[1]: Reloading.
Nov 29 07:49:16 compute-0 systemd-sysv-generator[237676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:49:16 compute-0 systemd-rc-local-generator[237672]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:49:16 compute-0 ceph-mon[75237]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:16 compute-0 systemd-logind[782]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 07:49:16 compute-0 systemd-logind[782]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 07:49:16 compute-0 lvm[237716]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:49:16 compute-0 lvm[237719]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:49:16 compute-0 lvm[237716]: VG ceph_vg1 finished
Nov 29 07:49:16 compute-0 lvm[237719]: VG ceph_vg2 finished
Nov 29 07:49:16 compute-0 lvm[237715]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:49:16 compute-0 lvm[237715]: VG ceph_vg0 finished
Nov 29 07:49:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:49:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:49:16 compute-0 systemd[1]: Reloading.
Nov 29 07:49:16 compute-0 systemd-rc-local-generator[237773]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:49:16 compute-0 systemd-sysv-generator[237776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:49:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:49:17 compute-0 sudo[237601]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:18 compute-0 sudo[239032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawwemrjofaibgmhdmqwrxbxcazlsigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402557.9282758-670-12878027607484/AnsiballZ_systemd_service.py'
Nov 29 07:49:18 compute-0 sudo[239032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:18 compute-0 ceph-mon[75237]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:49:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:49:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.852s CPU time.
Nov 29 07:49:18 compute-0 systemd[1]: run-r6f9e1eaa6e9347909691242104b69c7b.service: Deactivated successfully.
Nov 29 07:49:18 compute-0 python3.9[239047]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:49:18 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 29 07:49:18 compute-0 iscsid[226319]: iscsid shutting down.
Nov 29 07:49:18 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 07:49:18 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 29 07:49:18 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:49:18 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 07:49:18 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 07:49:18 compute-0 sudo[239032]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:19 compute-0 python3.9[239217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:49:20 compute-0 ceph-mon[75237]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:20 compute-0 sudo[239371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yybhhzutfqbthprogmbaogrdifnkpief ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402560.1404064-688-279932962809755/AnsiballZ_file.py'
Nov 29 07:49:20 compute-0 sudo[239371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:20 compute-0 python3.9[239373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:20 compute-0 sudo[239371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:21 compute-0 sudo[239525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbssqxoitjhmgaiunwsowxiqedojqonv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402561.089814-699-78401930988489/AnsiballZ_systemd_service.py'
Nov 29 07:49:21 compute-0 sudo[239525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:21 compute-0 python3.9[239527]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:49:21 compute-0 systemd[1]: Reloading.
Nov 29 07:49:21 compute-0 systemd-rc-local-generator[239551]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:49:21 compute-0 systemd-sysv-generator[239559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:49:22 compute-0 sudo[239525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:22 compute-0 ceph-mon[75237]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:22 compute-0 sshd-session[239488]: Invalid user exx from 103.236.140.19 port 43932
Nov 29 07:49:22 compute-0 sshd-session[239488]: Received disconnect from 103.236.140.19 port 43932:11: Bye Bye [preauth]
Nov 29 07:49:22 compute-0 sshd-session[239488]: Disconnected from invalid user exx 103.236.140.19 port 43932 [preauth]
Nov 29 07:49:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:23 compute-0 python3.9[239713]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:49:23 compute-0 network[239730]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:49:23 compute-0 network[239731]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:49:23 compute-0 network[239732]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:49:24 compute-0 ceph-mon[75237]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:26 compute-0 ceph-mon[75237]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:49:27.105 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:49:27.106 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:49:27.107 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:27 compute-0 sudo[240005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stzlkegqhqfruzpatdqfkixsjebpbqfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402567.3887746-718-127970762363396/AnsiballZ_systemd_service.py'
Nov 29 07:49:27 compute-0 sudo[240005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:28 compute-0 python3.9[240007]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:28 compute-0 sudo[240005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:28 compute-0 ceph-mon[75237]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:28 compute-0 sudo[240158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arqpiosdovewojqhjobjtntswpltrwbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402568.2781732-718-248187295279831/AnsiballZ_systemd_service.py'
Nov 29 07:49:28 compute-0 sudo[240158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:28 compute-0 python3.9[240160]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:29 compute-0 sudo[240158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:29 compute-0 sudo[240311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwowsnpdwfodeqgjsfizzsvdjbeqcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402569.2002926-718-74891107316588/AnsiballZ_systemd_service.py'
Nov 29 07:49:29 compute-0 sudo[240311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:29 compute-0 python3.9[240313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:29 compute-0 sudo[240311]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:30 compute-0 sudo[240464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alcdyjtkueapuzjndklcazhqujfuaioh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402569.9700344-718-269828316029650/AnsiballZ_systemd_service.py'
Nov 29 07:49:30 compute-0 sudo[240464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:30 compute-0 ceph-mon[75237]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:30 compute-0 python3.9[240466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:30 compute-0 sudo[240464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:31 compute-0 sudo[240617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cexlmbpnokmqfrxvtbfzdebogojxaqvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402570.832645-718-167155588967338/AnsiballZ_systemd_service.py'
Nov 29 07:49:31 compute-0 sudo[240617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:31 compute-0 python3.9[240619]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:31 compute-0 sudo[240617]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:32 compute-0 sudo[240780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhiuxoaqyykhtsnawzisedalxlnbjkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402571.7530487-718-274706518158004/AnsiballZ_systemd_service.py'
Nov 29 07:49:32 compute-0 sudo[240780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:32 compute-0 podman[240744]: 2025-11-29 07:49:32.15093333 +0000 UTC m=+0.102914253 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:49:32 compute-0 python3.9[240783]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:32 compute-0 ceph-mon[75237]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:32 compute-0 sudo[240780]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:32 compute-0 sudo[240942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piarqzlpqwwjszxqkhhyqozkyjbwpvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402572.6054175-718-254585017362847/AnsiballZ_systemd_service.py'
Nov 29 07:49:32 compute-0 sudo[240942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:33 compute-0 python3.9[240944]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:33 compute-0 sudo[240942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:33 compute-0 sudo[241095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyldogokuhmchurqkrnscahoovlwtuha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402573.502972-718-73500630359021/AnsiballZ_systemd_service.py'
Nov 29 07:49:33 compute-0 sudo[241095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:34 compute-0 python3.9[241097]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:49:34 compute-0 sudo[241095]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:34 compute-0 ceph-mon[75237]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:34 compute-0 sudo[241248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfppuumegyblqadcjajwrthmxrpkvyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402574.4833384-777-190639251112339/AnsiballZ_file.py'
Nov 29 07:49:34 compute-0 sudo[241248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:35 compute-0 python3.9[241250]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:35 compute-0 sudo[241248]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:35 compute-0 sudo[241400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzwfqkfxqbvtfynoolixdoguhsstfbjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402575.235012-777-235294930965378/AnsiballZ_file.py'
Nov 29 07:49:35 compute-0 sudo[241400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:35 compute-0 python3.9[241402]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:35 compute-0 sudo[241400]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:36 compute-0 sudo[241552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hukybrmwidonanowprjxporfpepjscua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402576.0058432-777-22309597420393/AnsiballZ_file.py'
Nov 29 07:49:36 compute-0 sudo[241552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:36 compute-0 ceph-mon[75237]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:36 compute-0 python3.9[241554]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:36 compute-0 sudo[241552]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:36 compute-0 podman[241652]: 2025-11-29 07:49:36.909387721 +0000 UTC m=+0.071863290 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 07:49:36 compute-0 sudo[241725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxyszjikqtdeeleoqybstnozkuyuucsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402576.6735578-777-59769329437740/AnsiballZ_file.py'
Nov 29 07:49:36 compute-0 sudo[241725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:37 compute-0 python3.9[241727]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:37 compute-0 sudo[241725]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:37 compute-0 sudo[241877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytcrfbekozyjefpooprugvztxoflkeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402577.3791778-777-56462726708000/AnsiballZ_file.py'
Nov 29 07:49:37 compute-0 sudo[241877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:38 compute-0 python3.9[241879]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:38 compute-0 sudo[241877]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:38 compute-0 ceph-mon[75237]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:38 compute-0 sudo[242029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htooqnzzhcjfmwrektpumqizjadtawth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402578.198763-777-174923402546608/AnsiballZ_file.py'
Nov 29 07:49:38 compute-0 sudo[242029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:38 compute-0 python3.9[242031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:49:38
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta']
Nov 29 07:49:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:49:38 compute-0 sudo[242029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:39 compute-0 sudo[242181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mredmkiabbugthefujqnxrfbpzndczdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402578.986307-777-208138901072916/AnsiballZ_file.py'
Nov 29 07:49:39 compute-0 sudo[242181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:39 compute-0 python3.9[242183]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:39 compute-0 sudo[242181]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:39 compute-0 podman[242283]: 2025-11-29 07:49:39.950431128 +0000 UTC m=+0.105400749 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:39 compute-0 sudo[242359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jilxyqfupqkylgzzlawvxudwqojzuujc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402579.6779938-777-218381747633623/AnsiballZ_file.py'
Nov 29 07:49:39 compute-0 sudo[242359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:40 compute-0 python3.9[242361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:40 compute-0 sudo[242359]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:40 compute-0 ceph-mon[75237]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:41 compute-0 sudo[242511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngkhrwhkzhqnauascexhnuujdlpxumln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402580.3977668-834-37982807686770/AnsiballZ_file.py'
Nov 29 07:49:41 compute-0 sudo[242511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:41 compute-0 python3.9[242513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:41 compute-0 sudo[242511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:42 compute-0 sudo[242663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyampbrcuetykimppfabvgtrirnwhims ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402581.7258883-834-162732470258267/AnsiballZ_file.py'
Nov 29 07:49:42 compute-0 sudo[242663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:42 compute-0 python3.9[242665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:42 compute-0 sudo[242663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:42 compute-0 ceph-mon[75237]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:49:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:49:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:43 compute-0 sudo[242815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duhldstfzxdeexhutsvbhzwqrixahcit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402582.9037778-834-170723325756698/AnsiballZ_file.py'
Nov 29 07:49:43 compute-0 sudo[242815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:43 compute-0 python3.9[242817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:43 compute-0 sudo[242815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:44 compute-0 sudo[242967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfidswpxuavwyzodkifllgflhlrmwvgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402583.662658-834-190132745115028/AnsiballZ_file.py'
Nov 29 07:49:44 compute-0 sudo[242967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:44 compute-0 python3.9[242969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:44 compute-0 sudo[242967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:44 compute-0 ceph-mon[75237]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:44 compute-0 sudo[243119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pabwzvzeslrcncczoieubmptqcvzuyah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402584.3500807-834-139808004467548/AnsiballZ_file.py'
Nov 29 07:49:44 compute-0 sudo[243119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:44 compute-0 python3.9[243121]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:44 compute-0 sudo[243119]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:45 compute-0 sudo[243271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oponhuhpphbkwnuucuymqzwoqyzhcuid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402585.0151782-834-238583788391806/AnsiballZ_file.py'
Nov 29 07:49:45 compute-0 sudo[243271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:45 compute-0 python3.9[243273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:45 compute-0 sudo[243271]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:46 compute-0 sudo[243423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhyzgkewdycpjplgesghvxpjfdhndyfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402585.737741-834-140960022375864/AnsiballZ_file.py'
Nov 29 07:49:46 compute-0 sudo[243423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:46 compute-0 python3.9[243425]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:46 compute-0 sudo[243423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:46 compute-0 ceph-mon[75237]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:46 compute-0 sudo[243575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmhullzjudwtebflxbpqkjhqiduwncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402586.4504418-834-75779034984348/AnsiballZ_file.py'
Nov 29 07:49:46 compute-0 sudo[243575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:47 compute-0 python3.9[243577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:49:47 compute-0 sudo[243575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:47 compute-0 sudo[243727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkueuqzubggwvcldpieibtcsyoljgksg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402587.5156026-892-24955900259683/AnsiballZ_command.py'
Nov 29 07:49:47 compute-0 sudo[243727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:48 compute-0 python3.9[243729]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:48 compute-0 sudo[243727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:48 compute-0 ceph-mon[75237]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:48 compute-0 python3.9[243881]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:49:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:49 compute-0 sudo[244031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wevcwauswiywafvfnptehosudmkbpijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402589.2075186-910-210484176476968/AnsiballZ_systemd_service.py'
Nov 29 07:49:49 compute-0 sudo[244031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:49 compute-0 python3.9[244033]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:49:49 compute-0 systemd[1]: Reloading.
Nov 29 07:49:49 compute-0 systemd-rc-local-generator[244059]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:49:49 compute-0 systemd-sysv-generator[244065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:49:50 compute-0 sudo[244031]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:50 compute-0 ceph-mon[75237]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:50 compute-0 sudo[244218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjidfwcyhxeybyltunqovkjpftlrbesa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402590.385888-918-4308983912928/AnsiballZ_command.py'
Nov 29 07:49:50 compute-0 sudo[244218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:50 compute-0 python3.9[244220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:50 compute-0 sudo[244218]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:51 compute-0 sudo[244371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovicktxdftdjhsrjgsqolndfarcekfrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402591.131-918-191966202105214/AnsiballZ_command.py'
Nov 29 07:49:51 compute-0 sudo[244371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:51 compute-0 ceph-mon[75237]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:51 compute-0 python3.9[244373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:51 compute-0 sudo[244371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:52 compute-0 sudo[244524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmphuraivrxducbrlorrknunshtonvto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402591.889888-918-149908980700993/AnsiballZ_command.py'
Nov 29 07:49:52 compute-0 sudo[244524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:52 compute-0 python3.9[244526]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:52 compute-0 sudo[244524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:53 compute-0 sudo[244677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaejazrdqdtmgajogqoxkemjfsdfqphj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402592.690897-918-222832240667829/AnsiballZ_command.py'
Nov 29 07:49:53 compute-0 sudo[244677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:53 compute-0 python3.9[244679]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:53 compute-0 sudo[244681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:53 compute-0 sudo[244681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:53 compute-0 sudo[244681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[244706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:54 compute-0 sudo[244706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[244731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:54 compute-0 sudo[244731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244731]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[244756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:49:54 compute-0 sudo[244756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 ceph-mon[75237]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:54 compute-0 sudo[244756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:49:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:49:54 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:54 compute-0 sudo[244869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:54 compute-0 sudo[244869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[244918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:54 compute-0 sudo[244918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[244969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:54 compute-0 sudo[244969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 sudo[244969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:54 compute-0 sudo[245027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmgniwpryeiiymqtgwvcbnpmduqtawsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402594.4699008-918-255044996605126/AnsiballZ_command.py'
Nov 29 07:49:54 compute-0 sudo[245027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:54 compute-0 sudo[245021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:49:54 compute-0 sudo[245021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:54 compute-0 python3.9[245043]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:54 compute-0 sudo[245027]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:55 compute-0 sudo[245021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:49:55 compute-0 sudo[245230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsculmjigmammavpwjutqwkzymrhktqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402595.103277-918-73508035737131/AnsiballZ_command.py'
Nov 29 07:49:55 compute-0 sudo[245230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:55 compute-0 python3.9[245232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:55 compute-0 sudo[245230]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev cf451a79-4dd7-4f2f-8188-90698e9701aa does not exist
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5c15d276-68e7-49db-bd39-5f7bf397c56f does not exist
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7742c89c-a3d4-4be1-af79-fec8813767a5 does not exist
Nov 29 07:49:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:55 compute-0 ceph-mon[75237]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:55 compute-0 sudo[245258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:55 compute-0 sudo[245258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:55 compute-0 sudo[245258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 sudo[245297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:55 compute-0 sudo[245297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:55 compute-0 sudo[245297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 sudo[245348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:55 compute-0 sudo[245348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:55 compute-0 sudo[245348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:55 compute-0 sudo[245386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:49:55 compute-0 sudo[245386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:49:56 compute-0 sudo[245485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgjubbexrjlquurudfjsvshynxapckua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402595.7427585-918-118405172627413/AnsiballZ_command.py'
Nov 29 07:49:56 compute-0 sudo[245485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:56 compute-0 python3.9[245496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:56 compute-0 sudo[245485]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.248616051 +0000 UTC m=+0.028810482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.48265826 +0000 UTC m=+0.262852631 container create 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:49:56 compute-0 systemd[1]: Started libpod-conmon-7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98.scope.
Nov 29 07:49:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.777334333 +0000 UTC m=+0.557528694 container init 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:49:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:49:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:49:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:49:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.785573636 +0000 UTC m=+0.565767977 container start 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:49:56 compute-0 sudo[245694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsazuzanxowggipzsvjrxubzaynoexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402596.5026252-918-12455857466845/AnsiballZ_command.py'
Nov 29 07:49:56 compute-0 kind_euclid[245595]: 167 167
Nov 29 07:49:56 compute-0 systemd[1]: libpod-7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98.scope: Deactivated successfully.
Nov 29 07:49:56 compute-0 sudo[245694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.795942647 +0000 UTC m=+0.576137018 container attach 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:49:56 compute-0 podman[245525]: 2025-11-29 07:49:56.796974785 +0000 UTC m=+0.577169126 container died 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:49:56 compute-0 python3.9[245699]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:49:57 compute-0 sudo[245694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d812144fdc8cc9b14f06236610981905decddf42b557bee0fd8139da8a87878-merged.mount: Deactivated successfully.
Nov 29 07:49:57 compute-0 podman[245525]: 2025-11-29 07:49:57.066519807 +0000 UTC m=+0.846714158 container remove 7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:49:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:57 compute-0 systemd[1]: libpod-conmon-7a6c15565db9136d7868b7b4d67258a20eccd4ad1cd6078263f7e9206ee22a98.scope: Deactivated successfully.
Nov 29 07:49:57 compute-0 podman[245743]: 2025-11-29 07:49:57.220641578 +0000 UTC m=+0.029212394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:57 compute-0 podman[245743]: 2025-11-29 07:49:57.594136618 +0000 UTC m=+0.402707354 container create 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:49:57 compute-0 systemd[1]: Started libpod-conmon-8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9.scope.
Nov 29 07:49:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:57 compute-0 podman[245743]: 2025-11-29 07:49:57.755586948 +0000 UTC m=+0.564157704 container init 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:49:57 compute-0 podman[245743]: 2025-11-29 07:49:57.764338265 +0000 UTC m=+0.572908991 container start 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:49:57 compute-0 podman[245743]: 2025-11-29 07:49:57.768361054 +0000 UTC m=+0.576931830 container attach 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:49:57 compute-0 ceph-mon[75237]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:58 compute-0 sudo[245890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgipqfrixfnbsrsnvrohdszeugelzezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402598.0378985-997-225143869267934/AnsiballZ_file.py'
Nov 29 07:49:58 compute-0 sudo[245890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:58 compute-0 python3.9[245892]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:49:58 compute-0 sudo[245890]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:58 compute-0 hopeful_turing[245760]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:49:58 compute-0 hopeful_turing[245760]: --> relative data size: 1.0
Nov 29 07:49:58 compute-0 hopeful_turing[245760]: --> All data devices are unavailable
Nov 29 07:49:58 compute-0 systemd[1]: libpod-8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9.scope: Deactivated successfully.
Nov 29 07:49:58 compute-0 podman[245743]: 2025-11-29 07:49:58.955717181 +0000 UTC m=+1.764287907 container died 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:49:58 compute-0 systemd[1]: libpod-8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9.scope: Consumed 1.121s CPU time.
Nov 29 07:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dee8d59676a2a2ff6ca6d3cb900f8afa31c92fea84181554c85299043f04f15-merged.mount: Deactivated successfully.
Nov 29 07:49:59 compute-0 podman[245743]: 2025-11-29 07:49:59.025159484 +0000 UTC m=+1.833730220 container remove 8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:49:59 compute-0 systemd[1]: libpod-conmon-8b9196b885561943702db38f160d9ffac591aca317667c74027099c08304dfc9.scope: Deactivated successfully.
Nov 29 07:49:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:49:59 compute-0 sudo[245386]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:59 compute-0 sudo[246081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ficnonagtjgvjgmipfckzmejjpsqegfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402598.748474-997-104547819190626/AnsiballZ_file.py'
Nov 29 07:49:59 compute-0 sudo[246081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:59 compute-0 sudo[246080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:59 compute-0 sudo[246080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:59 compute-0 sudo[246080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:59 compute-0 sudo[246108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:59 compute-0 sudo[246108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:59 compute-0 sudo[246108]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:59 compute-0 python3.9[246100]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:49:59 compute-0 sudo[246133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:59 compute-0 sudo[246133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:59 compute-0 sudo[246133]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:59 compute-0 sudo[246081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:59 compute-0 sudo[246158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:49:59 compute-0 sudo[246158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.724990797 +0000 UTC m=+0.044168869 container create 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:59 compute-0 systemd[1]: Started libpod-conmon-25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884.scope.
Nov 29 07:49:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.706793383 +0000 UTC m=+0.025971475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.815679407 +0000 UTC m=+0.134857499 container init 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.822378759 +0000 UTC m=+0.141556831 container start 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.826650995 +0000 UTC m=+0.145829067 container attach 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:49:59 compute-0 angry_bose[246363]: 167 167
Nov 29 07:49:59 compute-0 systemd[1]: libpod-25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884.scope: Deactivated successfully.
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.828083594 +0000 UTC m=+0.147261666 container died 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:49:59 compute-0 sudo[246392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmihcpjscxtlzkgwdyqczqkeujmbjaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402599.4989727-997-75231746617341/AnsiballZ_file.py'
Nov 29 07:49:59 compute-0 sudo[246392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bb44de0523ffb535e71f1886d9b6dc4fe277759b7ffe45d0336dc035b731a7a-merged.mount: Deactivated successfully.
Nov 29 07:49:59 compute-0 podman[246322]: 2025-11-29 07:49:59.870084323 +0000 UTC m=+0.189262395 container remove 25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bose, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:49:59 compute-0 systemd[1]: libpod-conmon-25f0837fd9a48cd45c2ac9d4eca8ba4bed16a0da0c597662a3acd40df2114884.scope: Deactivated successfully.
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.043324872 +0000 UTC m=+0.049137123 container create 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:50:00 compute-0 python3.9[246397]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:00 compute-0 systemd[1]: Started libpod-conmon-65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374.scope.
Nov 29 07:50:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:00 compute-0 sudo[246392]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8b03cb985c382aab5361ed11713feb6849b058e8318fa623567afcbd881a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8b03cb985c382aab5361ed11713feb6849b058e8318fa623567afcbd881a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8b03cb985c382aab5361ed11713feb6849b058e8318fa623567afcbd881a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd8b03cb985c382aab5361ed11713feb6849b058e8318fa623567afcbd881a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.026534887 +0000 UTC m=+0.032347068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.131677479 +0000 UTC m=+0.137489660 container init 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:50:00 compute-0 ceph-mon[75237]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.139336466 +0000 UTC m=+0.145148627 container start 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.142311917 +0000 UTC m=+0.148124108 container attach 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:00 compute-0 sudo[246584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsrvcjubebfzlfucynnbrykumhfwdtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402600.2927876-1019-36501679440895/AnsiballZ_file.py'
Nov 29 07:50:00 compute-0 sudo[246584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:00 compute-0 python3.9[246586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:00 compute-0 sudo[246584]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:00 compute-0 youthful_robinson[246430]: {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     "0": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "devices": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "/dev/loop3"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             ],
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_name": "ceph_lv0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_size": "21470642176",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "name": "ceph_lv0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "tags": {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.crush_device_class": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.encrypted": "0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_id": "0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.vdo": "0"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             },
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "vg_name": "ceph_vg0"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         }
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     ],
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     "1": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "devices": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "/dev/loop4"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             ],
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_name": "ceph_lv1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_size": "21470642176",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "name": "ceph_lv1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "tags": {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.crush_device_class": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.encrypted": "0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_id": "1",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.vdo": "0"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             },
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "vg_name": "ceph_vg1"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         }
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     ],
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     "2": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "devices": [
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "/dev/loop5"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             ],
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_name": "ceph_lv2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_size": "21470642176",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "name": "ceph_lv2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "tags": {
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.crush_device_class": "",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.encrypted": "0",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osd_id": "2",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:                 "ceph.vdo": "0"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             },
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "type": "block",
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:             "vg_name": "ceph_vg2"
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:         }
Nov 29 07:50:00 compute-0 youthful_robinson[246430]:     ]
Nov 29 07:50:00 compute-0 youthful_robinson[246430]: }
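Annotation: the JSON the youthful_robinson container printed above appears to be "ceph-volume lvm list --format json" output: a map from OSD id ("0", "1", "2") to the logical volumes backing that OSD, with the same metadata carried twice, once as the flat lv_tags string and once as the parsed tags object. A minimal parsing sketch, assuming the JSON has been captured to a file (lvm_list.json is a hypothetical name, not a path from this log):

    import json

    # Summarize `ceph-volume lvm list --format json` output as
    # osd_id -> (backing devices, lv_path, osd_fsid).
    with open("lvm_list.json") as fh:
        lvm_list = json.load(fh)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: devices={','.join(lv['devices'])} "
                f"lv={lv['lv_path']} osd_fsid={tags.get('ceph.osd_fsid', '?')}"
            )

On the listing above this would report osd.1 on /dev/loop4 and osd.2 on /dev/loop5, each backed by a ~21.5 GB LV.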
Nov 29 07:50:00 compute-0 systemd[1]: libpod-65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374.scope: Deactivated successfully.
Nov 29 07:50:00 compute-0 podman[246414]: 2025-11-29 07:50:00.945524884 +0000 UTC m=+0.951337065 container died 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cd8b03cb985c382aab5361ed11713feb6849b058e8318fa623567afcbd881a9-merged.mount: Deactivated successfully.
Nov 29 07:50:01 compute-0 podman[246414]: 2025-11-29 07:50:01.008623925 +0000 UTC m=+1.014436086 container remove 65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:50:01 compute-0 systemd[1]: libpod-conmon-65b48a136b1828fb66ce74e6a429e809463c695f6091c52b532668e466864374.scope: Deactivated successfully.
Nov 29 07:50:01 compute-0 sudo[246158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:01 compute-0 sudo[246680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:01 compute-0 sudo[246680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:01 compute-0 sudo[246680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:01 compute-0 sudo[246728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:01 compute-0 sudo[246728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:01 compute-0 sudo[246728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:01 compute-0 sudo[246753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:01 compute-0 sudo[246753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:01 compute-0 sudo[246753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:01 compute-0 sudo[246802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:50:01 compute-0 sudo[246802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
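Annotation: the sudo line above shows how cephadm runs ceph-volume: a copied cephadm binary under /var/lib/ceph/<fsid>/, an image pinned by digest, a timeout, and the ceph-volume arguments after "--". A sketch replaying the same listing through the cephadm CLI (requires root and cephadm on PATH; the --image pin is omitted here, pass it as in the log to match exactly):

    import json
    import subprocess

    # Re-run the raw listing cephadm issued above; fsid copied from the log.
    fsid = "321e9cb7-01a2-5759-bf8c-981c9a64aa3e"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid,
         "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4, sort_keys=True))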
Nov 29 07:50:01 compute-0 sudo[246853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljtkctugzdfmstfzrleoibnsjjwvbqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402600.9779806-1019-229160605929565/AnsiballZ_file.py'
Nov 29 07:50:01 compute-0 sudo[246853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:01 compute-0 python3.9[246855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:01 compute-0 sudo[246853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.697333927 +0000 UTC m=+0.043086300 container create e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:50:01 compute-0 systemd[1]: Started libpod-conmon-e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3.scope.
Nov 29 07:50:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.678265199 +0000 UTC m=+0.024017602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.780189874 +0000 UTC m=+0.125942287 container init e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.788557611 +0000 UTC m=+0.134310004 container start e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.792449807 +0000 UTC m=+0.138202200 container attach e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:01 compute-0 laughing_chaum[246955]: 167 167
Nov 29 07:50:01 compute-0 systemd[1]: libpod-e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3.scope: Deactivated successfully.
Nov 29 07:50:01 compute-0 conmon[246955]: conmon e6134c6004aeea80f8cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3.scope/container/memory.events
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.794675056 +0000 UTC m=+0.140427439 container died e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c7f2943805ac5ccaf71da07334b3faf20f9b1417a352875ea240b827a85314-merged.mount: Deactivated successfully.
Nov 29 07:50:01 compute-0 podman[246918]: 2025-11-29 07:50:01.84234684 +0000 UTC m=+0.188099223 container remove e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:50:01 compute-0 systemd[1]: libpod-conmon-e6134c6004aeea80f8cb2c46c4745078567ee42a9ac96e3fa2879aca0cabe5f3.scope: Deactivated successfully.
Nov 29 07:50:02 compute-0 podman[247045]: 2025-11-29 07:50:02.037675058 +0000 UTC m=+0.075173740 container create 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:50:02 compute-0 sudo[247101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxpnqjktreyppfizmzxtqwfzzykakwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402601.7603152-1019-132315337084170/AnsiballZ_file.py'
Nov 29 07:50:02 compute-0 podman[247045]: 2025-11-29 07:50:01.997689813 +0000 UTC m=+0.035188525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:02 compute-0 systemd[1]: Started libpod-conmon-7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788.scope.
Nov 29 07:50:02 compute-0 sudo[247101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfb743c7153e129843dd82c270a9fc83a76ae2af4393a96236d8b1a4a5949da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfb743c7153e129843dd82c270a9fc83a76ae2af4393a96236d8b1a4a5949da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfb743c7153e129843dd82c270a9fc83a76ae2af4393a96236d8b1a4a5949da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfb743c7153e129843dd82c270a9fc83a76ae2af4393a96236d8b1a4a5949da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
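Annotation: the four kernel notes above fire whenever podman bind-mounts paths from an xfs filesystem formatted without 64-bit inode timestamps (bigtime off), so timestamps cap at 2038; they are informational, not errors. A sketch checking the flag, assuming a recent xfsprogs whose xfs_info output includes a bigtime= field (older versions omit it):

    import re
    import subprocess

    # Inspect the root filesystem for the xfs bigtime feature.
    out = subprocess.run(
        ["xfs_info", "/"], capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"bigtime=(\d)", out)
    if m and m.group(1) == "1":
        print("bigtime enabled: timestamps extend past 2038")
    else:
        print("bigtime off or not reported: timestamps capped at 2038")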
Nov 29 07:50:02 compute-0 ceph-mon[75237]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:02 compute-0 podman[247045]: 2025-11-29 07:50:02.162560075 +0000 UTC m=+0.200058767 container init 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:50:02 compute-0 podman[247045]: 2025-11-29 07:50:02.171358315 +0000 UTC m=+0.208856967 container start 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:50:02 compute-0 podman[247045]: 2025-11-29 07:50:02.183141104 +0000 UTC m=+0.220639766 container attach 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:50:02 compute-0 python3.9[247107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:02 compute-0 sudo[247101]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:02 compute-0 podman[247111]: 2025-11-29 07:50:02.386807078 +0000 UTC m=+0.068735535 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
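Annotation: the health_status line above is podman's periodic healthcheck for ovn_metadata_agent; per the embedded config_data, the check is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. A sketch running the same checks on demand (podman's "healthcheck run" exits 0 when the check passes; container names are taken from this log):

    import subprocess

    def container_is_healthy(name: str) -> bool:
        """Run the container's configured healthcheck once via podman."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
            state = "healthy" if container_is_healthy(name) else "unhealthy"
            print(name, state)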
Nov 29 07:50:02 compute-0 sudo[247280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwztztzizrfghgywjtvzfkieassmjfyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402602.4535463-1019-120019653543147/AnsiballZ_file.py'
Nov 29 07:50:02 compute-0 sudo[247280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:02 compute-0 python3.9[247282]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:02 compute-0 sudo[247280]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:03 compute-0 musing_mahavira[247105]: {
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_id": 2,
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "type": "bluestore"
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     },
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_id": 0,
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "type": "bluestore"
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     },
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_id": 1,
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:         "type": "bluestore"
Nov 29 07:50:03 compute-0 musing_mahavira[247105]:     }
Nov 29 07:50:03 compute-0 musing_mahavira[247105]: }
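Annotation: musing_mahavira prints the "ceph-volume raw list --format json" result requested by the sudo line at 07:50:01. Unlike the LVM listing, it is keyed by osd_uuid and reports one bluestore entry per device-mapper path. The osd_uuid values match the ceph.osd_fsid tags in the LVM listing above, so the two views can be joined on that field. A sketch under that assumption (both file names are hypothetical capture files):

    import json

    # Join raw-list entries to lvm-list entries on osd_fsid/osd_uuid.
    with open("lvm_list.json") as fh:
        lvm_list = json.load(fh)
    with open("raw_list.json") as fh:
        raw_list = json.load(fh)

    fsid_to_lv = {
        lv["tags"]["ceph.osd_fsid"]: lv["lv_path"]
        for lvs in lvm_list.values()
        for lv in lvs
    }

    for osd_uuid, entry in sorted(raw_list.items(),
                                  key=lambda kv: kv[1]["osd_id"]):
        print(
            f"osd.{entry['osd_id']} ({entry['type']}): {entry['device']} "
            f"<- {fsid_to_lv.get(osd_uuid, 'not in LVM listing')}"
        )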
Nov 29 07:50:03 compute-0 systemd[1]: libpod-7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788.scope: Deactivated successfully.
Nov 29 07:50:03 compute-0 podman[247045]: 2025-11-29 07:50:03.166891788 +0000 UTC m=+1.204390430 container died 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:50:03 compute-0 systemd[1]: libpod-7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788.scope: Consumed 1.003s CPU time.
Nov 29 07:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cfb743c7153e129843dd82c270a9fc83a76ae2af4393a96236d8b1a4a5949da-merged.mount: Deactivated successfully.
Nov 29 07:50:03 compute-0 podman[247045]: 2025-11-29 07:50:03.238817609 +0000 UTC m=+1.276316251 container remove 7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:03 compute-0 systemd[1]: libpod-conmon-7f199a435785f441171f7f571cbfce53adc49514a1c6385b3173959e828f9788.scope: Deactivated successfully.
Nov 29 07:50:03 compute-0 sudo[246802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:50:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:50:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:50:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:50:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 00dd0226-5ffe-4e26-8d94-0fd50dfde489 does not exist
Nov 29 07:50:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 300b24b0-4a00-4684-abb1-5aa93aefd1d9 does not exist
Nov 29 07:50:03 compute-0 sudo[247446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:03 compute-0 sudo[247446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:03 compute-0 sudo[247446]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:03 compute-0 sudo[247499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjyquoaynkdfzlkiszavidjmxajmegix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402603.102803-1019-57512492916189/AnsiballZ_file.py'
Nov 29 07:50:03 compute-0 sudo[247499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:03 compute-0 sudo[247498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:50:03 compute-0 sudo[247498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:03 compute-0 sudo[247498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:03 compute-0 python3.9[247517]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:03 compute-0 sudo[247499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:04 compute-0 sudo[247675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hppkijowkihgndjdyqetiogrebwhobbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402603.7460704-1019-116432691398926/AnsiballZ_file.py'
Nov 29 07:50:04 compute-0 sudo[247675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:04 compute-0 python3.9[247677]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:04 compute-0 sudo[247675]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:04 compute-0 ceph-mon[75237]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:50:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:50:04 compute-0 sudo[247827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmnuaahaloahqpattxryskymixbqbzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402604.492526-1019-263386566423168/AnsiballZ_file.py'
Nov 29 07:50:04 compute-0 sudo[247827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:04 compute-0 python3.9[247829]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:05 compute-0 sudo[247827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:06 compute-0 ceph-mon[75237]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:07 compute-0 podman[247854]: 2025-11-29 07:50:07.932350779 +0000 UTC m=+0.090766593 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:50:08 compute-0 ceph-mon[75237]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:10 compute-0 ceph-mon[75237]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:10 compute-0 podman[247874]: 2025-11-29 07:50:10.937020439 +0000 UTC m=+0.099613873 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 07:50:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:11 compute-0 sudo[248025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egtgxrwtstmdrgiwdakmjyyfxbceuhsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402611.0352497-1208-115483680223490/AnsiballZ_getent.py'
Nov 29 07:50:11 compute-0 sudo[248025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:11 compute-0 python3.9[248027]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 07:50:11 compute-0 sudo[248025]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:12 compute-0 ceph-mon[75237]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:12 compute-0 sudo[248178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbzuxivewanjiuqgmnqmgaoscnmcrgvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402612.0861402-1216-109807374159226/AnsiballZ_group.py'
Nov 29 07:50:12 compute-0 sudo[248178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:12 compute-0 python3.9[248180]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:50:12 compute-0 groupadd[248181]: group added to /etc/group: name=nova, GID=42436
Nov 29 07:50:12 compute-0 groupadd[248181]: group added to /etc/gshadow: name=nova
Nov 29 07:50:12 compute-0 groupadd[248181]: new group: name=nova, GID=42436
Nov 29 07:50:12 compute-0 sudo[248178]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:13 compute-0 sudo[248336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llovmirrazohihqldcddcpvicfkrdtis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402613.1717632-1224-167690612738808/AnsiballZ_user.py'
Nov 29 07:50:13 compute-0 sudo[248336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:14 compute-0 python3.9[248338]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:50:14 compute-0 useradd[248340]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 29 07:50:14 compute-0 useradd[248340]: add 'nova' to group 'libvirt'
Nov 29 07:50:14 compute-0 useradd[248340]: add 'nova' to shadow group 'libvirt'
Nov 29 07:50:14 compute-0 sudo[248336]: pam_unix(sudo:session): session closed for user root
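Annotation: the getent/group/user sequence above is the EDPM nova role materializing the service account: group nova with GID 42436, user nova with UID 42436 and shell /bin/sh, added to the libvirt supplementary group. A minimal verification sketch using only the standard library, run on the same host:

    import grp
    import pwd

    # Verify the account the groupadd/useradd lines above created.
    nova = pwd.getpwnam("nova")
    assert nova.pw_uid == 42436 and nova.pw_gid == 42436
    assert nova.pw_shell == "/bin/sh"

    libvirt = grp.getgrnam("libvirt")
    assert "nova" in libvirt.gr_mem, "nova missing from libvirt group"
    print(f"nova: uid={nova.pw_uid} gid={nova.pw_gid} home={nova.pw_dir}")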
Nov 29 07:50:14 compute-0 ceph-mon[75237]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:15 compute-0 sshd-session[248371]: Accepted publickey for zuul from 192.168.122.30 port 40734 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 07:50:15 compute-0 systemd-logind[782]: New session 51 of user zuul.
Nov 29 07:50:15 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 29 07:50:15 compute-0 sshd-session[248371]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:50:15 compute-0 ceph-mon[75237]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:16 compute-0 sshd-session[248374]: Received disconnect from 192.168.122.30 port 40734:11: disconnected by user
Nov 29 07:50:16 compute-0 sshd-session[248374]: Disconnected from user zuul 192.168.122.30 port 40734
Nov 29 07:50:16 compute-0 sshd-session[248371]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:50:16 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 07:50:16 compute-0 systemd-logind[782]: Session 51 logged out. Waiting for processes to exit.
Nov 29 07:50:16 compute-0 systemd-logind[782]: Removed session 51.
Nov 29 07:50:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:17 compute-0 python3.9[248524]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:17 compute-0 python3.9[248645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402616.4814014-1249-38958478017197/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
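Annotation: the config.json written to /var/lib/openstack/config/nova above is consumed as a kolla start config (the containers in this log run with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS and mount such files at /var/lib/kolla/config_files/config.json). Its contents are not in the log; the sketch below only assumes the usual kolla layout ("command" plus optional "config_files" copy specs), which is an assumption, not something this capture shows:

    import json

    # Sanity-check the kolla config file written above; key names follow
    # the conventional kolla_start schema, assumed rather than logged.
    with open("/var/lib/openstack/config/nova/config.json") as fh:
        cfg = json.load(fh)

    assert "command" in cfg, "kolla config.json names the command to exec"
    for spec in cfg.get("config_files", []):
        print(f"copy {spec.get('source')} -> {spec.get('dest')}")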
Nov 29 07:50:18 compute-0 ceph-mon[75237]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:18 compute-0 python3.9[248795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:19 compute-0 python3.9[248871]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:19 compute-0 python3.9[249023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:20 compute-0 ceph-mon[75237]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:20 compute-0 python3.9[249144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402619.3893616-1249-49230275234270/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:20 compute-0 sshd-session[248948]: Received disconnect from 114.34.106.146 port 46752:11: Bye Bye [preauth]
Nov 29 07:50:20 compute-0 sshd-session[248948]: Disconnected from authenticating user root 114.34.106.146 port 46752 [preauth]
Nov 29 07:50:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:21 compute-0 python3.9[249294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:22 compute-0 ceph-mon[75237]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:22 compute-0 python3.9[249415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402620.7355254-1249-5120054358182/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:23 compute-0 python3.9[249565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:23 compute-0 python3.9[249686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402622.8220904-1249-27838330143621/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:24 compute-0 ceph-mon[75237]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:24 compute-0 sshd-session[249711]: Received disconnect from 20.185.243.158 port 53034:11: Bye Bye [preauth]
Nov 29 07:50:24 compute-0 sshd-session[249711]: Disconnected from authenticating user root 20.185.243.158 port 53034 [preauth]
Nov 29 07:50:24 compute-0 python3.9[249838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:25 compute-0 python3.9[249959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402624.1719747-1249-265991148047621/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:25 compute-0 sudo[250111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvlwvjhjamldaxtevigdqbzawgdzniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402625.6212857-1332-3059512337559/AnsiballZ_file.py'
Nov 29 07:50:26 compute-0 sudo[250111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:26 compute-0 python3.9[250113]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:50:26 compute-0 ceph-mon[75237]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:26 compute-0 sudo[250111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:26 compute-0 sudo[250263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beakpktkfyopwmlgmnqfmgkgdhhentry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402626.4430673-1340-242439393542278/AnsiballZ_copy.py'
Nov 29 07:50:26 compute-0 sudo[250263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:26 compute-0 python3.9[250265]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:50:27 compute-0 sudo[250263]: pam_unix(sudo:session): session closed for user root
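Annotation: the copy above installs the deployment's nova ssh-publickey as nova's authorized_keys (remote_src=True, so both paths are on this host), which is what later lets nova hosts reach each other over the ssh-config written at 07:50:20. A one-shot check that the installed key matches the source:

    import filecmp

    # Both paths are taken from the copy task above.
    same = filecmp.cmp(
        "/var/lib/openstack/config/nova/ssh-publickey",
        "/home/nova/.ssh/authorized_keys",
        shallow=False,
    )
    print("authorized_keys matches ssh-publickey:", same)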
Nov 29 07:50:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:50:27.110 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:50:27.112 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:50:27.112 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
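Annotation: the three ovn_metadata_agent lines above show oslo_concurrency.lockutils guarding ProcessMonitor._check_child_processes, logging how long the caller waited for the lock (0.002s) and how long it held it (0.000s). The sketch below is not the oslo implementation, just a self-contained imitation of that acquire/held accounting using threading.Lock:

    import threading
    import time

    _locks: dict[str, threading.Lock] = {}

    def timed_lock(name: str):
        """Context manager logging wait/held times, lockutils-style."""
        lock = _locks.setdefault(name, threading.Lock())

        class _Ctx:
            def __enter__(self):
                t0 = time.monotonic()
                lock.acquire()
                self.t_acq = time.monotonic()
                print(f'Lock "{name}" acquired :: waited {self.t_acq - t0:.3f}s')

            def __exit__(self, *exc):
                lock.release()
                held = time.monotonic() - self.t_acq
                print(f'Lock "{name}" released :: held {held:.3f}s')

        return _Ctx()

    with timed_lock("_check_child_processes"):
        pass  # inspect child processes here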
Nov 29 07:50:27 compute-0 sudo[250415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrlbunyynqnvolrhayaczydbrkadexj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402627.254133-1348-33503178812847/AnsiballZ_stat.py'
Nov 29 07:50:27 compute-0 sudo[250415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:28 compute-0 ceph-mon[75237]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:28 compute-0 python3.9[250417]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:50:28 compute-0 sudo[250415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:29 compute-0 sudo[250567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujanzhxahmnlyovxdpbrulancgefyjqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402628.741308-1356-194521625075075/AnsiballZ_stat.py'
Nov 29 07:50:29 compute-0 sudo[250567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:29 compute-0 python3.9[250569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:29 compute-0 sudo[250567]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:29 compute-0 sshd-session[249960]: Invalid user testuser from 103.234.151.178 port 21690
Nov 29 07:50:29 compute-0 sudo[250690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqthdbbnmbmskpofzkjjutlfcneotcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402628.741308-1356-194521625075075/AnsiballZ_copy.py'
Nov 29 07:50:29 compute-0 sudo[250690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:29 compute-0 python3.9[250692]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764402628.741308-1356-194521625075075/.source _original_basename=.zaq0vztn follow=False checksum=aa981f932e7708b73f05d686a5cf1f246516a566 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 07:50:30 compute-0 sudo[250690]: pam_unix(sudo:session): session closed for user root
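The copy above pins /var/lib/nova/compute_id to mode 0400 with the immutable (+i) attribute and records the expected digest in its checksum= field. A minimal sketch for re-verifying such a file against the logged sha1, with the path and digest taken from the entry above:

    import hashlib

    def sha1_of(path, bufsize=1 << 16):
        # Stream the file so large files need not fit in memory.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(bufsize), b''):
                h.update(chunk)
        return h.hexdigest()

    expected = 'aa981f932e7708b73f05d686a5cf1f246516a566'
    print(sha1_of('/var/lib/nova/compute_id') == expected)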
Nov 29 07:50:30 compute-0 ceph-mon[75237]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:30 compute-0 sshd-session[249960]: Received disconnect from 103.234.151.178 port 21690:11: Bye Bye [preauth]
Nov 29 07:50:30 compute-0 sshd-session[249960]: Disconnected from invalid user testuser 103.234.151.178 port 21690 [preauth]
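The two sshd-session entries above (invalid user testuser, then a preauth disconnect) are a routine password-guessing probe against the node's SSH port; a second probe from another address appears a few seconds later in this capture. A small sketch, assuming journal output has been saved to a text file (the path is illustrative), for tallying such probes per source address:

    import re
    from collections import Counter

    pattern = re.compile(r'Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+)')
    hits = Counter()
    with open('/tmp/journal.txt') as f:  # e.g. journalctl --no-pager > /tmp/journal.txt
        for line in f:
            m = pattern.search(line)
            if m:
                hits[m.group(2)] += 1   # count probes per source IP
    for ip, n in hits.most_common(10):
        print(ip, n)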
Nov 29 07:50:30 compute-0 python3.9[250844]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:50:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:31 compute-0 python3.9[250996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:31 compute-0 ceph-mon[75237]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:32 compute-0 podman[251091]: 2025-11-29 07:50:32.833459046 +0000 UTC m=+0.067111622 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 07:50:33 compute-0 python3.9[251128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402630.9812157-1382-249825015316455/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:33 compute-0 python3.9[251286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:50:34 compute-0 ceph-mon[75237]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:34 compute-0 python3.9[251407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402633.2155874-1397-165625711835855/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:50:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:35 compute-0 sudo[251558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmykzbebcyqagnryvbstbkvvhfqwcyft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402635.0836434-1414-263663607678406/AnsiballZ_container_config_data.py'
Nov 29 07:50:35 compute-0 sudo[251558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:35 compute-0 python3.9[251560]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 07:50:35 compute-0 sudo[251558]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:36 compute-0 ceph-mon[75237]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:36 compute-0 sudo[251710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ankufpoybjgsrnbzcpurwoyifamsksmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402635.8897407-1423-40461733373733/AnsiballZ_container_config_hash.py'
Nov 29 07:50:36 compute-0 sudo[251710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:36 compute-0 python3.9[251712]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:50:36 compute-0 sudo[251710]: pam_unix(sudo:session): session closed for user root
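The ansible-container_config_data / ansible-container_config_hash pair above recomputes a digest over the generated configuration so that a change anywhere in it yields a new EDPM_CONFIG_HASH (visible in the ovn_metadata_agent container environment earlier in this capture). A minimal illustration of the idea, not the module's actual implementation: hash every file under a config tree in a deterministic order.

    import hashlib, os

    def dir_hash(root):
        h = hashlib.sha256()
        for dirpath, _, names in sorted(os.walk(root)):
            for name in sorted(names):
                path = os.path.join(dirpath, name)
                h.update(path.encode())            # bind content to its location
                with open(path, 'rb') as f:
                    h.update(f.read())
        return h.hexdigest()

    # config_vol_prefix from the invocation above; any subdirectory works the same way.
    print(dir_hash('/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent'))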
Nov 29 07:50:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:37 compute-0 sudo[251864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyjrtxtrjrylnxiomyhzolnaxmmxcvyj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402636.7876434-1433-66457534094763/AnsiballZ_edpm_container_manage.py'
Nov 29 07:50:37 compute-0 sudo[251864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:50:37 compute-0 python3[251866]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:50:37 compute-0 sshd-session[251736]: Invalid user halo from 103.236.140.19 port 38794
Nov 29 07:50:38 compute-0 sshd-session[251736]: Received disconnect from 103.236.140.19 port 38794:11: Bye Bye [preauth]
Nov 29 07:50:38 compute-0 sshd-session[251736]: Disconnected from invalid user halo 103.236.140.19 port 38794 [preauth]
Nov 29 07:50:38 compute-0 ceph-mon[75237]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:50:38
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.log']
Nov 29 07:50:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:50:38 compute-0 podman[251901]: 2025-11-29 07:50:38.926668736 +0000 UTC m=+0.093457959 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 07:50:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:50:40 compute-0 ceph-mon[75237]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 07:50:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 29 07:50:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:41 compute-0 ceph-mon[75237]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:50:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:50:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:44 compute-0 ceph-mon[75237]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:44 compute-0 podman[251939]: 2025-11-29 07:50:44.594435868 +0000 UTC m=+2.760616131 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:50:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:50:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:50:51 compute-0 ceph-mon[75237]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:51 compute-0 ceph-mon[75237]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:50:51 compute-0 podman[251879]: 2025-11-29 07:50:51.459479889 +0000 UTC m=+14.024163405 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:50:51 compute-0 podman[252003]: 2025-11-29 07:50:51.76212193 +0000 UTC m=+0.060110614 container create 890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 07:50:51 compute-0 podman[252003]: 2025-11-29 07:50:51.731605721 +0000 UTC m=+0.029594435 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:50:51 compute-0 python3[251866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 07:50:51 compute-0 sudo[251864]: pam_unix(sudo:session): session closed for user root
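The PODMAN-CONTAINER-DEBUG line above shows the edpm_container_manage module flattening the container's JSON definition into a single podman create invocation. A hedged sketch of just the mapping visible in that line (the real module handles many more keys, labels, and healthchecks):

    def podman_create_args(name, cfg):
        args = ['podman', 'create', '--name', name,
                '--conmon-pidfile', '/run/%s.pid' % name]
        for key, val in cfg.get('environment', {}).items():
            args += ['--env', '%s=%s' % (key, val)]
        args += ['--log-driver', 'journald', '--network', cfg.get('net', 'bridge')]
        if 'user' in cfg:
            args += ['--user', cfg['user']]
        for opt in cfg.get('security_opt', []):
            args += ['--security-opt', opt]
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        args.append(cfg['image'])
        if cfg.get('command'):
            args.append(cfg['command'])
        return args

    # Trimmed-down config_data from the nova_compute_init entry above.
    cfg = {'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified',
           'user': 'root', 'net': 'none',
           'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id'},
           'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared']}
    print(' '.join(podman_create_args('nova_compute_init', cfg)))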
Nov 29 07:50:52 compute-0 ceph-mon[75237]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:50:52 compute-0 ceph-mon[75237]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 07:50:52 compute-0 sudo[252191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvycbmcziilmhjuenieucxtlhkxnzflv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402652.1748493-1441-166075055658844/AnsiballZ_stat.py'
Nov 29 07:50:52 compute-0 sudo[252191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:52 compute-0 python3.9[252193]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:50:52 compute-0 sudo[252191]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:50:53 compute-0 sudo[252345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztbisawrfwtndgijsuqfyzvbrbhfcusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402653.2218513-1453-49200128418067/AnsiballZ_container_config_data.py'
Nov 29 07:50:53 compute-0 sudo[252345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:53 compute-0 ceph-mon[75237]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:50:53 compute-0 python3.9[252347]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 07:50:53 compute-0 sudo[252345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:54 compute-0 sudo[252497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytsrdawihsaqhhoxyzmytmdxyfuavsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402654.0893176-1462-281022883507866/AnsiballZ_container_config_hash.py'
Nov 29 07:50:54 compute-0 sudo[252497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:54 compute-0 python3.9[252499]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:50:54 compute-0 sudo[252497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:55 compute-0 sudo[252649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxxnmbudzodnyuxqhdcywsjpzrdxsxfx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764402655.0357912-1472-80258929615285/AnsiballZ_edpm_container_manage.py'
Nov 29 07:50:55 compute-0 sudo[252649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
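The pg_autoscaler block above is internally consistent: each logged pg target equals usage_ratio * bias * T with T = 300, which matches a budget of 100 PGs per OSD across three OSDs (the three-OSD count is an inference from the ceph-volume lvm batch later in this capture, not something logged here). A worked check:

    # Reproduce two of the pg_autoscaler lines above; T = 300 is the assumed
    # cluster PG budget (mon_target_pg_per_osd=100 x 3 OSDs).
    T = 300
    for pool, ratio, bias, logged in [
            ('.mgr',               7.185749983720779e-06, 1.0, 0.0021557249951162337),
            ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0, 0.0006104707950771635)]:
        print(pool, ratio * bias * T, logged)

Targets this small are then quantized, which is why every pool above reports its current pg_num unchanged.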
Nov 29 07:50:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:57 compute-0 python3[252651]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:50:57 compute-0 ceph-mon[75237]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:57 compute-0 podman[252686]: 2025-11-29 07:50:57.477376779 +0000 UTC m=+0.024561701 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:50:57 compute-0 podman[252686]: 2025-11-29 07:50:57.676867131 +0000 UTC m=+0.224052043 container create 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Nov 29 07:50:57 compute-0 python3[252651]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 07:50:57 compute-0 sudo[252649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:58 compute-0 sudo[252873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzyqufzuoxpaaujdlogwjbxpxymssoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402658.056517-1480-18846808685621/AnsiballZ_stat.py'
Nov 29 07:50:58 compute-0 sudo[252873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:58 compute-0 python3.9[252875]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:50:58 compute-0 sudo[252873]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:58 compute-0 ceph-mon[75237]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:50:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:50:59 compute-0 sudo[253027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aknkrswghstrackmepfvvpnziasvjvwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402659.3717406-1489-215295398835025/AnsiballZ_file.py'
Nov 29 07:50:59 compute-0 sudo[253027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:50:59 compute-0 python3.9[253029]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:50:59 compute-0 sudo[253027]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:00 compute-0 ceph-mon[75237]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:01 compute-0 sudo[253178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etrikeskyopcimuageutgmupvkmcqyji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402660.0545619-1489-176489269340374/AnsiballZ_copy.py'
Nov 29 07:51:01 compute-0 sudo[253178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:01 compute-0 python3.9[253180]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402660.0545619-1489-176489269340374/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:51:01 compute-0 sudo[253178]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:01 compute-0 sudo[253254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culshxswrmjppipetlygdvkxnqztvdjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402660.0545619-1489-176489269340374/AnsiballZ_systemd.py'
Nov 29 07:51:01 compute-0 sudo[253254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:02 compute-0 python3.9[253256]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:51:02 compute-0 systemd[1]: Reloading.
Nov 29 07:51:02 compute-0 systemd-rc-local-generator[253279]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:51:02 compute-0 systemd-sysv-generator[253282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:51:02 compute-0 ceph-mon[75237]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:02 compute-0 sudo[253254]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:02 compute-0 sudo[253380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefklejgbqsdyffujzejqnskycljfkox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402660.0545619-1489-176489269340374/AnsiballZ_systemd.py'
Nov 29 07:51:02 compute-0 sudo[253380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:03 compute-0 podman[253339]: 2025-11-29 07:51:03.011174246 +0000 UTC m=+0.097773975 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 07:51:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:03 compute-0 python3.9[253384]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
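Functionally, the ansible-systemd task above amounts to enabling and restarting the freshly written edpm_nova_compute.service unit. A rough host-side equivalent, purely illustrative (the module itself handles scopes, daemon reloads, and error reporting):

    import subprocess

    unit = 'edpm_nova_compute.service'
    subprocess.run(['systemctl', 'enable', unit], check=True)
    subprocess.run(['systemctl', 'restart', unit], check=True)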
Nov 29 07:51:03 compute-0 systemd[1]: Reloading.
Nov 29 07:51:03 compute-0 systemd-sysv-generator[253429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:51:03 compute-0 systemd-rc-local-generator[253424]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:51:03 compute-0 sudo[253391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:03 compute-0 sudo[253391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:03 compute-0 sudo[253391]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:03 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 07:51:03 compute-0 ceph-mon[75237]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:03 compute-0 sudo[253453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:03 compute-0 sudo[253453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:03 compute-0 sudo[253453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 sudo[253489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:04 compute-0 sudo[253489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:04 compute-0 sudo[253489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 sudo[253514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:51:04 compute-0 sudo[253514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:04 compute-0 podman[253454]: 2025-11-29 07:51:04.161889835 +0000 UTC m=+0.252773564 container init 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:04 compute-0 podman[253454]: 2025-11-29 07:51:04.171872743 +0000 UTC m=+0.262756452 container start 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:51:04 compute-0 nova_compute[253543]: + sudo -E kolla_set_configs
Nov 29 07:51:04 compute-0 podman[253454]: nova_compute
Nov 29 07:51:04 compute-0 systemd[1]: Started nova_compute container.
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:04 compute-0 sudo[253380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Validating config file
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying service configuration files
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Writing out command to execute
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:04 compute-0 nova_compute[253543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:51:04 compute-0 nova_compute[253543]: ++ cat /run_command
Nov 29 07:51:04 compute-0 nova_compute[253543]: + CMD=nova-compute
Nov 29 07:51:04 compute-0 nova_compute[253543]: + ARGS=
Nov 29 07:51:04 compute-0 nova_compute[253543]: + sudo kolla_copy_cacerts
Nov 29 07:51:04 compute-0 nova_compute[253543]: + [[ ! -n '' ]]
Nov 29 07:51:04 compute-0 nova_compute[253543]: + . kolla_extend_start
Nov 29 07:51:04 compute-0 nova_compute[253543]: Running command: 'nova-compute'
Nov 29 07:51:04 compute-0 nova_compute[253543]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:51:04 compute-0 nova_compute[253543]: + umask 0022
Nov 29 07:51:04 compute-0 nova_compute[253543]: + exec nova-compute
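The trace above is kolla's standard container start: kolla_set_configs copies each entry of /var/lib/kolla/config_files/config.json into place under the COPY_ALWAYS strategy, the command is read back from /run_command, and the shell execs it so nova-compute replaces the startup script as the container's main process. A minimal sketch of that loop, assuming the simple source/dest/perm entries seen in this log (the real tool also handles globs, ownership, optional files, and validation):

    import json, os, shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for item in cfg.get('config_files', []):
        shutil.copy(item['source'], item['dest'])              # COPY_ALWAYS: overwrite on every start
        os.chmod(item['dest'], int(item.get('perm', '0600'), 8))

    os.execvp('/bin/sh', ['/bin/sh', '-c', cfg['command']])    # e.g. "nova-compute"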
Nov 29 07:51:04 compute-0 sudo[253514]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:04 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 04dc6fb2-2941-4aa0-b728-61c12aadcea7 does not exist
Nov 29 07:51:04 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev dece9066-ca0c-44e7-8100-9d4294aa4ef1 does not exist
Nov 29 07:51:04 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5277f7e3-4026-4176-b8ae-ed1e66089575 does not exist
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:51:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:04 compute-0 sudo[253611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:04 compute-0 sudo[253611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:04 compute-0 sudo[253611]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 sudo[253636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:04 compute-0 sudo[253636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:04 compute-0 sudo[253636]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 sudo[253684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:04 compute-0 sudo[253684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:04 compute-0 sudo[253684]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:51:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
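[Annotation] Each handle_command/audit pair above is the active mgr (mgr.14126, daemon mgr.compute-0.fwfehy) dispatching a JSON-encoded command to the monitor; the second run of "from=..." lines is the same audit records relayed to the cluster log channel. A hedged sketch of the same call path using the librados Python binding (assumes python3-rados, a reachable cluster, and an illustrative conffile path):

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # illustrative path
cluster.connect()
# One of the commands the mgr dispatched above, as a JSON string:
cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(ret, outbuf.decode() or outs)
cluster.shutdown()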
Nov 29 07:51:04 compute-0 sudo[253738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:51:04 compute-0 sudo[253738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
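[Annotation] The sudo line above shows cephadm (the versioned copy under /var/lib/ceph/<fsid>/) running ceph-volume inside the pinned ceph image to build OSDs on three pre-created logical volumes: CEPH_VOLUME_OSDSPEC_AFFINITY ties the run to the 'default_drive_group' spec, --no-auto disables batch's own device planning, and --no-systemd skips unit creation because cephadm manages the units itself. An illustrative reconstruction of the inner call (everything after the '--' separator is what ceph-volume receives inside the container; cephadm consumes --fsid and --config-json):

import subprocess

# Illustrative only: in practice this runs inside the ceph container, with the
# cluster config injected by cephadm via '--config-json -'.
subprocess.run(
    ["ceph-volume", "lvm", "batch", "--no-auto",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
     "--yes", "--no-systemd"],
    check=True,
)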
Nov 29 07:51:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:05 compute-0 python3.9[253848]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.331129091 +0000 UTC m=+0.024690254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.469773722 +0000 UTC m=+0.163334905 container create d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:05 compute-0 systemd[1]: Started libpod-conmon-d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f.scope.
Nov 29 07:51:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.634174143 +0000 UTC m=+0.327735326 container init d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.642601299 +0000 UTC m=+0.336162452 container start d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:51:05 compute-0 mystifying_hertz[253917]: 167 167
Nov 29 07:51:05 compute-0 systemd[1]: libpod-d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f.scope: Deactivated successfully.
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.688265845 +0000 UTC m=+0.381827048 container attach d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:51:05 compute-0 podman[253878]: 2025-11-29 07:51:05.690411573 +0000 UTC m=+0.383972746 container died d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5314e6d19f3e66c2936e58d469be99500747a57bb3a20468ccc03e1f62f5310-merged.mount: Deactivated successfully.
Nov 29 07:51:06 compute-0 ceph-mon[75237]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:06 compute-0 podman[253878]: 2025-11-29 07:51:06.204437036 +0000 UTC m=+0.897998219 container remove d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:06 compute-0 systemd[1]: libpod-conmon-d96906d53184362388d1d75ce986c7495f15837d8a6b1ba8d41f097688dfc96f.scope: Deactivated successfully.
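[Annotation] The create → init → start → attach → died → remove sequence above, bracketed by the libpod-conmon scope starting and deactivating, is the journald footprint of a single foreground, auto-removed container run; the "167 167" the container printed matches the ceph uid/gid that cephadm probes images for. A hedged sketch of an invocation that produces this exact event sequence (the stat entrypoint is an assumption about what this particular probe ran):

import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
# One foreground 'podman run --rm' yields the create/init/start/attach/died/
# remove events seen above, then removes the container.
out = subprocess.run(
    ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True,
)
print(out.stdout.strip())   # e.g. "167 167"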
Nov 29 07:51:06 compute-0 python3.9[254056]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:51:06 compute-0 podman[254088]: 2025-11-29 07:51:06.43672392 +0000 UTC m=+0.084367966 container create 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:51:06 compute-0 podman[254088]: 2025-11-29 07:51:06.375887587 +0000 UTC m=+0.023531663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:06 compute-0 systemd[1]: Started libpod-conmon-6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816.scope.
Nov 29 07:51:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:06 compute-0 podman[254088]: 2025-11-29 07:51:06.65325695 +0000 UTC m=+0.300901016 container init 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:51:06 compute-0 podman[254088]: 2025-11-29 07:51:06.661583783 +0000 UTC m=+0.309227829 container start 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:51:06 compute-0 podman[254088]: 2025-11-29 07:51:06.707608578 +0000 UTC m=+0.355252624 container attach 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:51:06 compute-0 nova_compute[253543]: 2025-11-29 07:51:06.821 253547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:06 compute-0 nova_compute[253543]: 2025-11-29 07:51:06.822 253547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:06 compute-0 nova_compute[253543]: 2025-11-29 07:51:06.822 253547 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:06 compute-0 nova_compute[253543]: 2025-11-29 07:51:06.822 253547 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
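[Annotation] The three "Loaded VIF plugin class" lines and the summary above come from os_vif scanning its plugin entry points at startup (the trace itself points at os_vif/__init__.py:44). A minimal sketch of the call that triggers this, assuming the os_vif package is installed:

import os_vif

# initialize() loads every plugin registered under the 'os_vif' entry-point
# namespace (here: linux_bridge, noop, ovs) and logs the lines seen above.
os_vif.initialize()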
Nov 29 07:51:06 compute-0 nova_compute[253543]: 2025-11-29 07:51:06.985 253547 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:07 compute-0 nova_compute[253543]: 2025-11-29 07:51:07.019 253547 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:07 compute-0 nova_compute[253543]: 2025-11-29 07:51:07.019 253547 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
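[Annotation] The failed grep above is a capability probe, not an error: the connector greps the iscsiadm binary for the literal string 'node.session.scan', and exit status 1 simply means the string is absent, so manual-scan support is treated as unavailable. The same probe pattern in Python:

import subprocess

# grep -F exits 0 if the literal string occurs in the file, 1 if it does not;
# a non-zero status here means "feature absent", not failure.
probe = subprocess.run(
    ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
    capture_output=True,
)
manual_scan_supported = (probe.returncode == 0)
print(manual_scan_supported)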
Nov 29 07:51:07 compute-0 python3.9[254237]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:51:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:07 compute-0 sshd[189732]: Timeout before authentication for connection from 45.78.219.195 to 38.102.83.203, pid = 236238
Nov 29 07:51:07 compute-0 happy_haibt[254128]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:51:07 compute-0 happy_haibt[254128]: --> relative data size: 1.0
Nov 29 07:51:07 compute-0 happy_haibt[254128]: --> All data devices are unavailable
Nov 29 07:51:07 compute-0 systemd[1]: libpod-6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816.scope: Deactivated successfully.
Nov 29 07:51:07 compute-0 podman[254088]: 2025-11-29 07:51:07.816107155 +0000 UTC m=+1.463751221 container died 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:51:07 compute-0 systemd[1]: libpod-6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816.scope: Consumed 1.090s CPU time.
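[Annotation] The happy_haibt output above is ceph-volume's batch report: three LVM data devices were passed in, but all were rejected as unavailable, so this pass created nothing and the container exited after about a second of CPU time. One way to see why devices are rejected is ceph-volume's JSON inventory; a hedged sketch (key names per the inventory output format; treat as illustrative):

import json
import subprocess

# 'ceph-volume inventory --format json' reports each device with an
# "available" flag and its "rejected_reasons".
raw = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for dev in json.loads(raw):
    print(dev["path"], dev["available"], dev.get("rejected_reasons", []))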
Nov 29 07:51:07 compute-0 sudo[254424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loleuvnnjqijlpqjsdpneekuzojvhefx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402667.3663676-1549-212092346033322/AnsiballZ_podman_container.py'
Nov 29 07:51:07 compute-0 sudo[254424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.016 253547 INFO nova.virt.driver [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a6bddc046d97eceb61f2321c07e11f06f90cb2bf563916b9c68ab88d9c7017c-merged.mount: Deactivated successfully.
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.187 253547 INFO nova.compute.provider_config [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.207 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.208 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.208 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.209 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.210 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.210 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.210 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.210 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.210 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.211 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.211 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.211 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.211 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.211 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.212 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.212 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.212 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.212 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.212 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.213 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.213 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.213 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.213 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.213 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.214 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.215 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.216 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.217 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.218 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.219 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.220 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.221 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.222 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.223 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.224 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.225 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.226 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.227 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.228 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.229 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.230 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.231 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 python3.9[254426]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.232 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.233 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.234 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.235 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.235 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.235 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.235 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.236 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.236 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.236 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.236 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.236 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.237 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.238 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.239 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.240 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.241 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.242 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.243 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.243 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.244 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.245 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.246 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.247 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.248 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.249 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.250 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.251 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.252 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.253 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.254 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.255 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.256 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.257 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.258 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.259 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.260 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.261 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.262 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.263 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.264 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.264 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.264 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.264 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.264 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.265 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.266 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.267 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.268 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.269 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.270 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.271 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.272 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.273 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.274 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.275 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.276 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.277 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.278 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.278 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.278 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.278 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.278 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.279 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.279 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.279 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.279 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.279 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.280 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.280 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.280 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.280 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.280 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.281 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.282 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 podman[254088]: 2025-11-29 07:51:08.282551512 +0000 UTC m=+1.930195598 container remove 6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.283 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.284 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.285 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.286 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.286 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.286 253547 WARNING oslo_config.cfg [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:51:08 compute-0 nova_compute[253543]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:51:08 compute-0 nova_compute[253543]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:51:08 compute-0 nova_compute[253543]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:51:08 compute-0 nova_compute[253543]: ).  Its value may be silently ignored in the future.
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.286 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.287 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.288 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.289 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rbd_secret_uuid        = 321e9cb7-01a2-5759-bf8c-981c9a64aa3e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.290 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.291 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.292 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.293 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.294 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.295 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.296 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.297 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.298 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.299 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.300 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.301 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.302 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.303 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.304 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.305 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.306 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.306 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.306 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.306 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.306 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.307 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.308 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.309 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.310 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.311 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.311 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.311 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.311 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.311 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.312 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.313 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.314 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.315 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.316 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 sudo[253738]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.317 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.318 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.319 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.320 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.321 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.322 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.323 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.324 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.325 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.326 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.327 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.328 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.329 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.330 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.331 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.332 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.333 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.334 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.335 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.336 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.337 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.338 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.339 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.340 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.341 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.342 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.343 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.344 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.345 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.346 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.347 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.348 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.349 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.350 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.350 253547 DEBUG oslo_service.service [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.351 253547 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:51:08 compute-0 ceph-mon[75237]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:08 compute-0 sudo[254424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:08 compute-0 sudo[254445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:08 compute-0 sudo[254445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:08 compute-0 systemd[1]: libpod-conmon-6a6d964681ec57f5b897a1e0dbeeeca8c3e8733bc9338fef611e44aabb053816.scope: Deactivated successfully.
Nov 29 07:51:08 compute-0 sudo[254445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:08 compute-0 sudo[254475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:08 compute-0 sudo[254475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:08 compute-0 sudo[254475]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.471 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.472 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.473 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.473 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:51:08 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:51:08 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:51:08 compute-0 sudo[254527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:08 compute-0 sudo[254527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:08 compute-0 sudo[254527]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.587 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fcfb4cc8f40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.590 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fcfb4cc8f40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.592 253547 INFO nova.virt.libvirt.driver [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Connection event '1' reason 'None'
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.620 253547 WARNING nova.virt.libvirt.driver [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 07:51:08 compute-0 nova_compute[253543]: 2025-11-29 07:51:08.620 253547 DEBUG nova.virt.libvirt.volume.mount [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:51:08 compute-0 sudo[254590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:51:08 compute-0 sudo[254590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:09 compute-0 sudo[254815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgseayandvrxlyovnetnguboaibgamne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402668.630274-1557-67820396326650/AnsiballZ_systemd.py'
Nov 29 07:51:09 compute-0 sudo[254815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:09.065992304 +0000 UTC m=+0.112527230 container create 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:08.976047711 +0000 UTC m=+0.022582657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:09 compute-0 systemd[1]: Started libpod-conmon-357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8.scope.
Nov 29 07:51:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:09.366637202 +0000 UTC m=+0.413172168 container init 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:09.379567199 +0000 UTC m=+0.426102125 container start 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:51:09 compute-0 cranky_napier[254829]: 167 167
Nov 29 07:51:09 compute-0 systemd[1]: libpod-357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8.scope: Deactivated successfully.
Nov 29 07:51:09 compute-0 conmon[254829]: conmon 357e6af7057823a34a3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8.scope/container/memory.events
Nov 29 07:51:09 compute-0 python3.9[254817]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:51:09 compute-0 systemd[1]: Stopping nova_compute container...
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:09.485203314 +0000 UTC m=+0.531738280 container attach 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:51:09 compute-0 podman[254763]: 2025-11-29 07:51:09.487783824 +0000 UTC m=+0.534318780 container died 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.548 253547 INFO nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]: 
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <host>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <uuid>a28c55e7-2003-4883-bda8-258835775761</uuid>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <arch>x86_64</arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model>EPYC-Rome-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <vendor>AMD</vendor>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <microcode version='16777317'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='x2apic'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='tsc-deadline'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='osxsave'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='hypervisor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='tsc_adjust'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='spec-ctrl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='stibp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='arch-capabilities'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='cmp_legacy'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='topoext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='virt-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='lbrv'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='tsc-scale'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='vmcb-clean'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='pause-filter'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='pfthreshold'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='svme-addr-chk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='rdctl-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='mds-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature name='pschange-mc-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <pages unit='KiB' size='4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <pages unit='KiB' size='2048'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <power_management>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <suspend_mem/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </power_management>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <iommu support='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <migration_features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <live/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <uri_transports>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <uri_transport>tcp</uri_transport>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <uri_transport>rdma</uri_transport>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </uri_transports>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </migration_features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <topology>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <cells num='1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <cell id='0'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <memory unit='KiB'>7864324</memory>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <distances>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <sibling id='0' value='10'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           </distances>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           <cpus num='8'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:           </cpus>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         </cell>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </cells>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </topology>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <cache>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </cache>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <secmodel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model>selinux</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <doi>0</doi>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </secmodel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <secmodel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model>dac</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <doi>0</doi>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </secmodel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </host>
Nov 29 07:51:09 compute-0 nova_compute[253543]: 
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <guest>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <os_type>hvm</os_type>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <arch name='i686'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <wordsize>32</wordsize>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <domain type='qemu'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <domain type='kvm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <pae/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <nonpae/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <acpi default='on' toggle='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <apic default='on' toggle='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <cpuselection/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <deviceboot/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <externalSnapshot/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </guest>
Nov 29 07:51:09 compute-0 nova_compute[253543]: 
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <guest>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <os_type>hvm</os_type>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <arch name='x86_64'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <wordsize>64</wordsize>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <domain type='qemu'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <domain type='kvm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <acpi default='on' toggle='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <apic default='on' toggle='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <cpuselection/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <deviceboot/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <externalSnapshot/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </guest>
Nov 29 07:51:09 compute-0 nova_compute[253543]: 
Nov 29 07:51:09 compute-0 nova_compute[253543]: </capabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]: 
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.554 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.588 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:51:09 compute-0 nova_compute[253543]: <domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <domain>kvm</domain>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <arch>i686</arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <vcpu max='240'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <iothreads supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <os supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='firmware'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <loader supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>rom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pflash</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='readonly'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>yes</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='secure'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </loader>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </os>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='maximumMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <vendor>AMD</vendor>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='succor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='custom' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-128'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-256'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-512'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 podman[254770]: 2025-11-29 07:51:09.616410085 +0000 UTC m=+0.630219803 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <memoryBacking supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='sourceType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>anonymous</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>memfd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </memoryBacking>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <disk supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='diskDevice'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>disk</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cdrom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>floppy</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>lun</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ide</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>fdc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>sata</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </disk>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <graphics supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vnc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egl-headless</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </graphics>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <video supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='modelType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vga</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cirrus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>none</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>bochs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ramfb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </video>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hostdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='mode'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>subsystem</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='startupPolicy'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>mandatory</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>requisite</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>optional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='subsysType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pci</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='capsType'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='pciBackend'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hostdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <rng supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>random</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </rng>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <filesystem supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='driverType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>path</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>handle</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtiofs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </filesystem>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <tpm supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-tis</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-crb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emulator</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>external</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendVersion'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>2.0</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </tpm>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <redirdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </redirdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <channel supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </channel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <crypto supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </crypto>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <interface supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>passt</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </interface>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <panic supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>isa</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>hyperv</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </panic>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <console supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>null</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dev</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pipe</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stdio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>udp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tcp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu-vdagent</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </console>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <gic supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <genid supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backup supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <async-teardown supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <ps2 supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sev supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sgx supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hyperv supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='features'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>relaxed</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vapic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>spinlocks</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vpindex</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>runtime</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>synic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stimer</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reset</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vendor_id</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>frequencies</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reenlightenment</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tlbflush</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ipi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>avic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emsr_bitmap</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>xmm_input</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hyperv>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <launchSecurity supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='sectype'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tdx</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </launchSecurity>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]: </domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.599 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:51:09 compute-0 nova_compute[253543]: <domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <domain>kvm</domain>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <arch>i686</arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <vcpu max='4096'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <iothreads supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <os supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='firmware'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <loader supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>rom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pflash</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='readonly'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>yes</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='secure'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </loader>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </os>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='maximumMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <vendor>AMD</vendor>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='succor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
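[editor's note] The three <mode> entries above (host-passthrough, maximum, host-model) are the CPU modes libvirt offers this guest architecture; the host-model block also shows what libvirt resolved the physical CPU to (EPYC-Rome with xsaves disabled). In OpenStack these modes are selected through nova's [libvirt]/cpu_mode option, with "custom" picking a named model from the <mode name='custom'> list that follows. An illustrative nova.conf fragment; the values are examples, not this deployment's settings:

    [libvirt]
    # cpu_mode may be none, host-model, host-passthrough, or custom.
    cpu_mode = custom
    # Only meaningful with cpu_mode = custom; illustrative choice.
    cpu_models = EPYC-Rome

In practice nova cross-checks the configured model against documents like this one; on this AMD EPYC-Rome host, Intel-only models such as Cascadelake-Server are reported usable='no' below.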
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='custom' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
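[editor's note] Each usable='no' model in this list is paired with a <blockers> element naming the CPU features whose absence on the host blocks that model. A minimal parsing sketch, assuming the capabilities XML logged here has been saved to a hypothetical local file domcaps.xml:

    # List which custom-mode CPU models this host can and cannot run,
    # and why, from a saved domain-capabilities document.
    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    custom = root.find("./cpu/mode[@name='custom']")
    for model in custom.findall('model'):
        if model.get('usable') == 'yes':
            print('usable :', model.text)
        else:
            blk = custom.find(f"./blockers[@model='{model.text}']")
            missing = [f.get('name') for f in blk.findall('feature')] if blk is not None else []
            print('blocked:', model.text, '->', ', '.join(missing))

Run against this capture, the sketch would report EPYC-v1, EPYC-v2, and EPYC-Rome-v4 as usable, while EPYC-Rome v1 through v3 and EPYC v3/v4 are blocked solely on xsaves, consistent with the host-model block earlier that disables xsaves on this EPYC-Rome host.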
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-128'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-256'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-512'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f6c7ef3ea59d63121784016f9ce4f3da8eda64f6c1187b1562a33bddc99865-merged.mount: Deactivated successfully.
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <memoryBacking supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='sourceType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>anonymous</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>memfd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </memoryBacking>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <disk supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='diskDevice'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>disk</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cdrom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>floppy</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>lun</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>fdc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>sata</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </disk>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <graphics supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vnc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egl-headless</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </graphics>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <video supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='modelType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vga</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cirrus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>none</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>bochs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ramfb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </video>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hostdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='mode'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>subsystem</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='startupPolicy'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>mandatory</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>requisite</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>optional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='subsysType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pci</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='capsType'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='pciBackend'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hostdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <rng supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>random</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </rng>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <filesystem supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='driverType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>path</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>handle</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtiofs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </filesystem>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <tpm supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-tis</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-crb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emulator</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>external</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendVersion'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>2.0</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </tpm>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <redirdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </redirdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <channel supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </channel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <crypto supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </crypto>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <interface supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>passt</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </interface>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <panic supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>isa</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>hyperv</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </panic>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <console supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>null</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dev</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pipe</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stdio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>udp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tcp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu-vdagent</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </console>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <gic supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <genid supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backup supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <async-teardown supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <ps2 supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sev supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sgx supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hyperv supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='features'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>relaxed</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vapic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>spinlocks</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vpindex</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>runtime</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>synic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stimer</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reset</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vendor_id</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>frequencies</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reenlightenment</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tlbflush</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ipi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>avic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emsr_bitmap</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>xmm_input</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hyperv>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <launchSecurity supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='sectype'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tdx</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </launchSecurity>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]: </domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.668 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.672 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:51:09 compute-0 nova_compute[253543]: <domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <domain>kvm</domain>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <arch>x86_64</arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <vcpu max='240'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <iothreads supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <os supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='firmware'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <loader supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>rom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pflash</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='readonly'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>yes</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='secure'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </loader>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </os>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='maximumMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <vendor>AMD</vendor>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='succor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='custom' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-128'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-256'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-512'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <memoryBacking supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='sourceType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>anonymous</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>memfd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </memoryBacking>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <disk supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='diskDevice'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>disk</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cdrom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>floppy</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>lun</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ide</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>fdc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>sata</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </disk>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <graphics supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vnc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egl-headless</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </graphics>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <video supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='modelType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vga</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cirrus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>none</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>bochs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ramfb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </video>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hostdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='mode'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>subsystem</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='startupPolicy'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>mandatory</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>requisite</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>optional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='subsysType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pci</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='capsType'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='pciBackend'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hostdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <rng supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>random</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </rng>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <filesystem supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='driverType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>path</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>handle</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtiofs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </filesystem>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <tpm supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-tis</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-crb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emulator</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>external</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendVersion'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>2.0</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </tpm>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <redirdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </redirdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <channel supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </channel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <crypto supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </crypto>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <interface supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>passt</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </interface>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <panic supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>isa</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>hyperv</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </panic>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <console supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>null</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dev</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pipe</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stdio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>udp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tcp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu-vdagent</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </console>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <gic supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <genid supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backup supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <async-teardown supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <ps2 supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sev supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sgx supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hyperv supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='features'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>relaxed</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vapic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>spinlocks</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vpindex</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>runtime</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>synic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stimer</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reset</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vendor_id</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>frequencies</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reenlightenment</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tlbflush</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ipi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>avic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emsr_bitmap</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>xmm_input</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hyperv>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <launchSecurity supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='sectype'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tdx</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </launchSecurity>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]: </domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.773 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:51:09 compute-0 nova_compute[253543]: <domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <domain>kvm</domain>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <arch>x86_64</arch>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <vcpu max='4096'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <iothreads supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <os supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='firmware'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>efi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <loader supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>rom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pflash</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='readonly'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>yes</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='secure'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>yes</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>no</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </loader>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </os>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='maximumMigratable'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>on</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>off</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <vendor>AMD</vendor>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='succor'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <mode name='custom' supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Denverton-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='auto-ibrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amd-psfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='stibp-always-on'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='EPYC-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-128'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-256'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx10-512'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='prefetchiti'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Haswell-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512er'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512pf'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fma4'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tbm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xop'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='amx-tile'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-bf16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-fp16'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bitalg'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrc'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fzrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='la57'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='taa-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xfd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 podman[254842]: 2025-11-29 07:51:09.901916487 +0000 UTC m=+0.483888876 container remove 357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ifma'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cmpccxadd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fbsdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='fsrs'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ibrs-all'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mcdt-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pbrsb-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='psdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='serialize'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vaes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 systemd[1]: libpod-conmon-357e6af7057823a34a3c6f9ff049517085949c22dc22aa90a94a8ee48a772cb8.scope: Deactivated successfully.
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='hle'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='rtm'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512bw'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512cd'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512dq'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512f'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='avx512vl'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='invpcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pcid'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='pku'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='mpx'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='core-capability'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='split-lock-detect'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='cldemote'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='erms'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='gfni'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdir64b'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='movdiri'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='xsaves'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='athlon-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='core2duo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='coreduo-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='n270-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='ss'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <blockers model='phenom-v1'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnow'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <feature name='3dnowext'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </blockers>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </mode>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </cpu>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <memoryBacking supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <enum name='sourceType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>anonymous</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <value>memfd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </memoryBacking>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <disk supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='diskDevice'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>disk</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cdrom</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>floppy</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>lun</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>fdc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>sata</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </disk>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <graphics supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vnc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egl-headless</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </graphics>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <video supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='modelType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vga</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>cirrus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>none</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>bochs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ramfb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </video>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hostdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='mode'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>subsystem</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='startupPolicy'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>mandatory</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>requisite</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>optional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='subsysType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pci</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>scsi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='capsType'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='pciBackend'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hostdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <rng supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtio-non-transitional</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>random</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>egd</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </rng>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <filesystem supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='driverType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>path</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>handle</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>virtiofs</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </filesystem>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <tpm supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-tis</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tpm-crb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emulator</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>external</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendVersion'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>2.0</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </tpm>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <redirdev supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='bus'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>usb</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </redirdev>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <channel supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </channel>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <crypto supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendModel'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>builtin</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </crypto>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <interface supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='backendType'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>default</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>passt</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </interface>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <panic supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='model'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>isa</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>hyperv</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </panic>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <console supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='type'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>null</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vc</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pty</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dev</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>file</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>pipe</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stdio</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>udp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tcp</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>unix</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>qemu-vdagent</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>dbus</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </console>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </devices>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   <features>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <gic supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <genid supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <backup supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <async-teardown supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <ps2 supported='yes'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sev supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <sgx supported='no'/>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <hyperv supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='features'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>relaxed</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vapic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>spinlocks</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vpindex</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>runtime</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>synic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>stimer</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reset</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>vendor_id</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>frequencies</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>reenlightenment</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tlbflush</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>ipi</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>avic</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>emsr_bitmap</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>xmm_input</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </defaults>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </hyperv>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     <launchSecurity supported='yes'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       <enum name='sectype'>
Nov 29 07:51:09 compute-0 nova_compute[253543]:         <value>tdx</value>
Nov 29 07:51:09 compute-0 nova_compute[253543]:       </enum>
Nov 29 07:51:09 compute-0 nova_compute[253543]:     </launchSecurity>
Nov 29 07:51:09 compute-0 nova_compute[253543]:   </features>
Nov 29 07:51:09 compute-0 nova_compute[253543]: </domainCapabilities>
Nov 29 07:51:09 compute-0 nova_compute[253543]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
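
The XML dump above is libvirt's domainCapabilities document, which nova fetches in _get_domain_capabilities to learn which device models and features the hypervisor supports. A minimal sketch of retrieving and inspecting the same document with the libvirt Python bindings (assuming the bindings are installed and a local qemu:///system socket, which this log only implies via the virtqemud entries below):

    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings

    # nova's default connection URI for the QEMU/KVM driver.
    conn = libvirt.open("qemu:///system")

    # All arguments are optional; None lets libvirt pick defaults for the
    # emulator binary and machine type.
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)

    # Mirror the <video> enum above: list the supported video model types.
    models = [v.text for v in
              root.findall(".//video/enum[@name='modelType']/value")]
    print("video models:", models)  # vga, cirrus, virtio, none, bochs, ramfb

    conn.close()
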
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.850 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.851 253547 DEBUG nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.851 253547 INFO nova.virt.libvirt.host [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Secure Boot support detected
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.853 253547 INFO nova.virt.libvirt.driver [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
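
The INFO line above records a simple precedence rule: when live_migration_permit_post_copy is True and the host supports post-copy, nova will not fall back to auto-converge for hard-to-converge migrations. A toy restatement of that decision (the option names are real nova [libvirt] settings; the function itself is illustrative, not nova's code):

    def pick_live_migration_strategy(permit_post_copy: bool,
                                     post_copy_available: bool,
                                     permit_auto_converge: bool) -> str:
        """Illustrative restatement of the precedence logged above."""
        if permit_post_copy and post_copy_available:
            # Matches the log line: post-copy wins, auto-converge unused.
            return "post-copy"
        if permit_auto_converge:
            return "auto-converge"
        return "pre-copy only"

    # The logged case: option set to True and host support detected.
    print(pick_live_migration_strategy(True, True, True))  # post-copy
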
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.862 253547 DEBUG nova.virt.libvirt.driver [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.937 253547 INFO nova.virt.node [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Determined node identity 858d78b2-ffcd-4247-ba96-0ec767fec62e from /var/lib/nova/compute_id
Nov 29 07:51:09 compute-0 nova_compute[253543]: 2025-11-29 07:51:09.968 253547 WARNING nova.compute.manager [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Compute nodes ['858d78b2-ffcd-4247-ba96-0ec767fec62e'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 07:51:10 compute-0 nova_compute[253543]: 2025-11-29 07:51:10.021 253547 INFO nova.compute.manager [None req-25f5e90e-be45-41bc-9a78-fbb1bc5083e2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:51:10 compute-0 nova_compute[253543]: 2025-11-29 07:51:10.024 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:10 compute-0 nova_compute[253543]: 2025-11-29 07:51:10.024 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:10 compute-0 nova_compute[253543]: 2025-11-29 07:51:10.024 253547 DEBUG oslo_concurrency.lockutils [None req-52ab0475-9322-4b91-a74a-6d0837ae5add - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
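
The three DEBUG lines above are oslo.concurrency's standard acquire/release trace around the "singleton_lock" taken during service startup. A minimal sketch of the pattern that produces them (assuming oslo.concurrency is installed; the lock name is taken from the log):

    from oslo_concurrency import lockutils

    # Entering the context manager logs "Acquiring lock" / "Acquired lock"
    # at DEBUG, and leaving it logs "Releasing lock", as seen above.
    with lockutils.lock("singleton_lock"):
        pass  # critical section guarded by the in-process lock
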
Nov 29 07:51:10 compute-0 podman[254884]: 2025-11-29 07:51:10.140343495 +0000 UTC m=+0.093572712 container create 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:51:10 compute-0 podman[254884]: 2025-11-29 07:51:10.081338582 +0000 UTC m=+0.034567819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:10 compute-0 systemd[1]: Started libpod-conmon-2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44.scope.
Nov 29 07:51:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999164a76e43434e2cc1d10423aed3113057f6f3f0230e6539019a1196d50a6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999164a76e43434e2cc1d10423aed3113057f6f3f0230e6539019a1196d50a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999164a76e43434e2cc1d10423aed3113057f6f3f0230e6539019a1196d50a6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999164a76e43434e2cc1d10423aed3113057f6f3f0230e6539019a1196d50a6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:10 compute-0 podman[254884]: 2025-11-29 07:51:10.336776826 +0000 UTC m=+0.290006073 container init 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:10 compute-0 podman[254884]: 2025-11-29 07:51:10.345764207 +0000 UTC m=+0.298993425 container start 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:51:10 compute-0 podman[254884]: 2025-11-29 07:51:10.386603303 +0000 UTC m=+0.339832520 container attach 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:51:10 compute-0 virtqemud[254549]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 07:51:10 compute-0 virtqemud[254549]: hostname: compute-0
Nov 29 07:51:10 compute-0 virtqemud[254549]: End of file while reading data: Input/output error
Nov 29 07:51:10 compute-0 ceph-mon[75237]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:10 compute-0 systemd[1]: libpod-495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea.scope: Deactivated successfully.
Nov 29 07:51:10 compute-0 systemd[1]: libpod-495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea.scope: Consumed 3.576s CPU time.
Nov 29 07:51:10 compute-0 podman[254857]: 2025-11-29 07:51:10.487697816 +0000 UTC m=+1.001528766 container died 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea-userdata-shm.mount: Deactivated successfully.
Nov 29 07:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05-merged.mount: Deactivated successfully.
Nov 29 07:51:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]: {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     "0": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "devices": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "/dev/loop3"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             ],
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_name": "ceph_lv0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_size": "21470642176",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "name": "ceph_lv0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "tags": {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_name": "ceph",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.crush_device_class": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.encrypted": "0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_id": "0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.vdo": "0"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             },
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "vg_name": "ceph_vg0"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         }
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     ],
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     "1": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "devices": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "/dev/loop4"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             ],
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_name": "ceph_lv1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_size": "21470642176",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "name": "ceph_lv1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "tags": {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_name": "ceph",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.crush_device_class": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.encrypted": "0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_id": "1",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.vdo": "0"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             },
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "vg_name": "ceph_vg1"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         }
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     ],
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     "2": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "devices": [
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "/dev/loop5"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             ],
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_name": "ceph_lv2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_size": "21470642176",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "name": "ceph_lv2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "tags": {
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.cluster_name": "ceph",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.crush_device_class": "",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.encrypted": "0",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osd_id": "2",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:                 "ceph.vdo": "0"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             },
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "type": "block",
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:             "vg_name": "ceph_vg2"
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:         }
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]:     ]
Nov 29 07:51:11 compute-0 flamboyant_banzai[254900]: }
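
The JSON block printed by the flamboyant_banzai container has the shape of ceph-volume lvm list --format json output: a map of OSD id to the logical volumes backing it. A small standard-library sketch of summarizing such a document (the input filename is hypothetical):

    import json

    # Hypothetical capture of the container stdout shown above.
    with open("ceph_volume_lvm_list.json") as f:
        osds = json.load(f)

    # Keys are OSD ids ("0", "1", "2"); each value is a list of LV records.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")
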
Nov 29 07:51:11 compute-0 systemd[1]: libpod-2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44.scope: Deactivated successfully.
Nov 29 07:51:11 compute-0 podman[254884]: 2025-11-29 07:51:11.144425729 +0000 UTC m=+1.097655006 container died 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:51:12 compute-0 ceph-mon[75237]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-999164a76e43434e2cc1d10423aed3113057f6f3f0230e6539019a1196d50a6b-merged.mount: Deactivated successfully.
Nov 29 07:51:12 compute-0 podman[254884]: 2025-11-29 07:51:12.285830288 +0000 UTC m=+2.239059505 container remove 2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:51:12 compute-0 systemd[1]: libpod-conmon-2776a4ecd4256df8029e931506ba068d4bc20ddfb5bc317fce8573781cc5aa44.scope: Deactivated successfully.
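
The create / init / start / attach / died / remove entries above trace one complete podman lifecycle for the short-lived ceph helper container. The same sequence can be watched from podman's event stream; a sketch (the JSON field names are an assumption about recent podman releases, not confirmed by this log):

    import json
    import subprocess

    # Follow container lifecycle events as they happen.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # e.g. "create flamboyant_banzai", "start ...", "died ...", "remove ..."
        print(ev.get("Status"), ev.get("Name"))
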
Nov 29 07:51:12 compute-0 sudo[254590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:12 compute-0 podman[254857]: 2025-11-29 07:51:12.345372056 +0000 UTC m=+2.859202996 container cleanup 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 07:51:12 compute-0 podman[254857]: nova_compute
Nov 29 07:51:12 compute-0 podman[254945]: nova_compute
Nov 29 07:51:12 compute-0 sudo[254938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:12 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 07:51:12 compute-0 systemd[1]: Stopped nova_compute container.
Nov 29 07:51:12 compute-0 sudo[254938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:12 compute-0 sudo[254938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:12 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 07:51:12 compute-0 sudo[254977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:12 compute-0 sudo[254977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:12 compute-0 sudo[254977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:12 compute-0 sudo[255011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:12 compute-0 sudo[255011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:12 compute-0 sudo[255011]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279ee5f4f8360e78ce31094c0f5a87bd06a51deaa7bb5af1a725484edd7cfc05/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:12 compute-0 sudo[255045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:51:12 compute-0 sudo[255045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:12 compute-0 podman[254976]: 2025-11-29 07:51:12.672655108 +0000 UTC m=+0.221280019 container init 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 07:51:12 compute-0 podman[254976]: 2025-11-29 07:51:12.679590384 +0000 UTC m=+0.228215275 container start 495c3f71b44403abc075c4a688ed39f29d268017db7a6383090dfa11020c2eea (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute)
Nov 29 07:51:12 compute-0 nova_compute[255040]: + sudo -E kolla_set_configs
Nov 29 07:51:12 compute-0 podman[254976]: nova_compute
Nov 29 07:51:12 compute-0 systemd[1]: Started nova_compute container.
Nov 29 07:51:12 compute-0 sudo[254815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Validating config file
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying service configuration files
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Writing out command to execute
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:12 compute-0 nova_compute[255040]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:51:12 compute-0 nova_compute[255040]: ++ cat /run_command
Nov 29 07:51:12 compute-0 nova_compute[255040]: + CMD=nova-compute
Nov 29 07:51:12 compute-0 nova_compute[255040]: + ARGS=
Nov 29 07:51:12 compute-0 nova_compute[255040]: + sudo kolla_copy_cacerts
Nov 29 07:51:12 compute-0 nova_compute[255040]: + [[ ! -n '' ]]
Nov 29 07:51:12 compute-0 nova_compute[255040]: + . kolla_extend_start
Nov 29 07:51:12 compute-0 nova_compute[255040]: Running command: 'nova-compute'
Nov 29 07:51:12 compute-0 nova_compute[255040]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:51:12 compute-0 nova_compute[255040]: + umask 0022
Nov 29 07:51:12 compute-0 nova_compute[255040]: + exec nova-compute
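
The INFO lines above come from kolla_set_configs reading /var/lib/kolla/config_files/config.json: under the COPY_ALWAYS strategy it deletes each destination, copies the source into place, and resets ownership and permissions, after which kolla_start execs the command read from /run_command. A simplified sketch of that loop (an illustration of the logged behaviour, not kolla's actual implementation; the config fragment echoes entries from the log, and directory targets such as /etc/ceph are omitted for brevity):

    import os
    import shutil

    # Fragment in the shape of /var/lib/kolla/config_files/config.json.
    config = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
        ],
    }

    for cf in config["config_files"]:
        dest = cf["dest"]
        if os.path.exists(dest):              # "Deleting <dest>"
            os.unlink(dest)
        shutil.copy(cf["source"], dest)       # "Copying <source> to <dest>"
        shutil.chown(dest, user=cf["owner"])  # "Setting permission for ..."
        os.chmod(dest, int(cf["perm"], 8))
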
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.088932349 +0000 UTC m=+0.115091119 container create 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:12.998192534 +0000 UTC m=+0.024351334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:13 compute-0 systemd[1]: Started libpod-conmon-42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a.scope.
Nov 29 07:51:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.24881661 +0000 UTC m=+0.274975400 container init 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.25812547 +0000 UTC m=+0.284284240 container start 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:51:13 compute-0 systemd[1]: libpod-42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a.scope: Deactivated successfully.
Nov 29 07:51:13 compute-0 wizardly_jackson[255238]: 167 167
Nov 29 07:51:13 compute-0 conmon[255238]: conmon 42fa2e32034c86fddf8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a.scope/container/memory.events
Nov 29 07:51:13 compute-0 sudo[255290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjlxulvwlxmjfypqypzlmvlqzxlhzxsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764402673.0190134-1566-200895678047870/AnsiballZ_podman_container.py'
Nov 29 07:51:13 compute-0 sudo[255290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.304189115 +0000 UTC m=+0.330348055 container attach 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.305266614 +0000 UTC m=+0.331425394 container died 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0375cbd55ef5f11df10fdda23b9ce7d05103f4c4a2d85b61d470d65339b483-merged.mount: Deactivated successfully.
Nov 29 07:51:13 compute-0 python3.9[255302]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:51:13 compute-0 podman[255143]: 2025-11-29 07:51:13.678037687 +0000 UTC m=+0.704196457 container remove 42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:51:13 compute-0 systemd[1]: libpod-conmon-42fa2e32034c86fddf8b7c1167423d3fe29813c3445597a460ddc0c851fd560a.scope: Deactivated successfully.
Nov 29 07:51:13 compute-0 systemd[1]: Started libpod-conmon-890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa.scope.
Nov 29 07:51:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc754a7378d39b86a575a77e6b2bef25682c0ef3d078eb6309ccdabfdcdc26f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc754a7378d39b86a575a77e6b2bef25682c0ef3d078eb6309ccdabfdcdc26f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc754a7378d39b86a575a77e6b2bef25682c0ef3d078eb6309ccdabfdcdc26f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:13 compute-0 podman[255344]: 2025-11-29 07:51:13.975452279 +0000 UTC m=+0.135314013 container create ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:13 compute-0 podman[255344]: 2025-11-29 07:51:13.906892439 +0000 UTC m=+0.066754203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:14 compute-0 podman[255326]: 2025-11-29 07:51:14.045424856 +0000 UTC m=+0.282258265 container init 890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:51:14 compute-0 podman[255326]: 2025-11-29 07:51:14.053276577 +0000 UTC m=+0.290109986 container start 890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 07:51:14 compute-0 python3.9[255302]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
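[annotation] The ansible-containers.podman.podman_container invocation and the container start above carry the full config_data for nova_compute_init. A minimal sketch, reconstructing the equivalent podman CLI call from that logged config_data — the flag spellings are standard podman options assumed for illustration, not output of the module itself, and restart/detach handling is omitted:

    import shlex

    # Values copied from the config_data label logged for nova_compute_init above.
    config_data = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "user": "root",
        "net": "none",
        "security_opt": ["label=disable"],
        "environment": {"NOVA_STATEDIR_OWNERSHIP_SKIP": "/var/lib/nova/compute_id",
                        "__OS_DEBUG": False},
        "volumes": ["/dev/log:/dev/log",
                    "/var/lib/nova:/var/lib/nova:shared",
                    "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
                    "/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z"],
        "command": "bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init",
    }

    argv = ["podman", "run", "--name", "nova_compute_init",
            "--user", config_data["user"], "--net", config_data["net"]]
    for opt in config_data["security_opt"]:
        argv += ["--security-opt", opt]
    for key, val in config_data["environment"].items():
        argv += ["--env", f"{key}={val}"]
    for vol in config_data["volumes"]:
        argv += ["--volume", vol]
    argv.append(config_data["image"])
    argv += shlex.split(config_data["command"])

    print(shlex.join(argv))  # only prints the reconstructed invocation; nothing is executed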
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 07:51:14 compute-0 nova_compute_init[255366]: INFO:nova_statedir:Nova statedir ownership complete
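[annotation] The INFO:nova_statedir lines above record a walk of /var/lib/nova that chowns anything not already owned by the target uid/gid (42436:42436 here), honours the NOVA_STATEDIR_OWNERSHIP_SKIP path, and applies the container SELinux context. A minimal sketch of that behavior — the helper name and the use of chcon are assumptions; the real /sbin/nova_statedir_ownership.py may differ in detail:

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436  # target ownership, as logged above
    SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id")}
    CONTEXT = "system_u:object_r:container_file_t:s0"  # context set in the log

    def apply_statedir_ownership(root="/var/lib/nova"):
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                # "Checking uid: ... gid: ..." then chown only on mismatch,
                # matching the "already 42436:42436" short-circuit logged above.
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)
                # chcon used here purely for illustration; the real script could
                # use SELinux bindings instead.
                subprocess.run(["chcon", CONTEXT, path], check=False)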
Nov 29 07:51:14 compute-0 systemd[1]: libpod-890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa.scope: Deactivated successfully.
Nov 29 07:51:14 compute-0 podman[255368]: 2025-11-29 07:51:14.173214925 +0000 UTC m=+0.042038509 container died 890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible)
Nov 29 07:51:14 compute-0 systemd[1]: Started libpod-conmon-ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894.scope.
Nov 29 07:51:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0f52307b576c39462f5377ffb7077bcc56e3d8d8118119452e2f7b516e84a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0f52307b576c39462f5377ffb7077bcc56e3d8d8118119452e2f7b516e84a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0f52307b576c39462f5377ffb7077bcc56e3d8d8118119452e2f7b516e84a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0f52307b576c39462f5377ffb7077bcc56e3d8d8118119452e2f7b516e84a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:14 compute-0 ceph-mon[75237]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:14 compute-0 podman[255344]: 2025-11-29 07:51:14.3224456 +0000 UTC m=+0.482307364 container init ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:51:14 compute-0 podman[255344]: 2025-11-29 07:51:14.332063809 +0000 UTC m=+0.491925543 container start ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:51:14 compute-0 podman[255344]: 2025-11-29 07:51:14.364608112 +0000 UTC m=+0.524469866 container attach ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa-userdata-shm.mount: Deactivated successfully.
Nov 29 07:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc754a7378d39b86a575a77e6b2bef25682c0ef3d078eb6309ccdabfdcdc26f-merged.mount: Deactivated successfully.
Nov 29 07:51:14 compute-0 podman[255368]: 2025-11-29 07:51:14.547491189 +0000 UTC m=+0.416314763 container cleanup 890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Nov 29 07:51:14 compute-0 systemd[1]: libpod-conmon-890575c391546937dff6836c9651851d5d6163f577c18ec3d536ecea46db26fa.scope: Deactivated successfully.
Nov 29 07:51:14 compute-0 sudo[255290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.797 255071 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.798 255071 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.798 255071 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.798 255071 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
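[annotation] The three "Loaded VIF plugin class" lines and the summary above come from os_vif's plugin discovery at initialize(). A service embeds it like this (sketch, assuming the os-vif package is installed):

    import os_vif

    # Discovers and initializes VIF plugins via stevedore entry points;
    # on this host that yields linux_bridge, noop, and ovs, as logged above.
    os_vif.initialize()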
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.946 255071 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.963 255071 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:14 compute-0 nova_compute[255040]: 2025-11-29 07:51:14.963 255071 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
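[annotation] The three oslo_concurrency.processutils lines above are a capability probe: the binary /sbin/iscsiadm is grepped for the string node.session.scan to decide whether manual session scanning is supported, and the exit status of 1 (string absent, "Not Retrying") means it is not. A standalone sketch of the same probe:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # Mirrors the logged command: grep -F node.session.scan /sbin/iscsiadm
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0  # 0: string found; 1: not found (as logged here)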
Nov 29 07:51:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:15 compute-0 sshd-session[223236]: Connection closed by 192.168.122.30 port 41734
Nov 29 07:51:15 compute-0 sshd-session[223185]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:51:15 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 07:51:15 compute-0 systemd[1]: session-50.scope: Consumed 2min 37.998s CPU time.
Nov 29 07:51:15 compute-0 systemd-logind[782]: Session 50 logged out. Waiting for processes to exit.
Nov 29 07:51:15 compute-0 systemd-logind[782]: Removed session 50.
Nov 29 07:51:15 compute-0 musing_davinci[255391]: {
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_id": 2,
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "type": "bluestore"
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     },
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_id": 0,
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "type": "bluestore"
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     },
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_id": 1,
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:51:15 compute-0 musing_davinci[255391]:         "type": "bluestore"
Nov 29 07:51:15 compute-0 musing_davinci[255391]:     }
Nov 29 07:51:15 compute-0 musing_davinci[255391]: }
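[annotation] The musing_davinci lines above are a single JSON document split across journald lines: a mapping from osd_uuid to per-OSD metadata (ceph_fsid, device, osd_id, type), a shape consistent with ceph-volume's JSON listing of bluestore OSDs. A sketch that reassembles and summarises such a payload — raw_lines is a hypothetical stand-in for however the per-line text after the "musing_davinci[255391]: " prefix was captured:

    import json

    raw_lines = []  # fill with the JSON fragments extracted from the log lines
    payload = json.loads("\n".join(raw_lines) or "{}")
    for osd_uuid, osd in sorted(payload.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: device={osd['device']} "
              f"fsid={osd['ceph_fsid']} type={osd['type']}")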
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.433 255071 INFO nova.virt.driver [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:51:15 compute-0 systemd[1]: libpod-ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894.scope: Deactivated successfully.
Nov 29 07:51:15 compute-0 systemd[1]: libpod-ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894.scope: Consumed 1.102s CPU time.
Nov 29 07:51:15 compute-0 podman[255344]: 2025-11-29 07:51:15.451827746 +0000 UTC m=+1.611689480 container died ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.563 255071 INFO nova.compute.provider_config [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.579 255071 DEBUG oslo_concurrency.lockutils [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.579 255071 DEBUG oslo_concurrency.lockutils [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.580 255071 DEBUG oslo_concurrency.lockutils [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
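[annotation] The Acquiring/Acquired/Releasing trio above is oslo.concurrency's standard DEBUG logging around a named lock. Equivalent application code (sketch):

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        pass  # critical section; acquire and release are DEBUG-logged as seen above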
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.580 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.580 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.580 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.581 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.581 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.581 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.581 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.581 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.582 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.583 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.583 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.583 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.583 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.583 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.584 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.585 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.585 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.585 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.585 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.585 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.586 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.586 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.586 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.586 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.586 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.587 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.587 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.587 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.587 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.587 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.588 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.588 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.588 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.588 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.589 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.590 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.591 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.592 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.592 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.592 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.592 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.592 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.593 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.594 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.594 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.594 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.594 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.594 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.595 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.595 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.595 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.595 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.595 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.596 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.597 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.598 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.599 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.600 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.600 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.600 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.600 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.603 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0f52307b576c39462f5377ffb7077bcc56e3d8d8118119452e2f7b516e84a7-merged.mount: Deactivated successfully.
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.603 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.604 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.605 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.606 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.607 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.608 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.609 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.610 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.611 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.612 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.613 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.614 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.615 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.616 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.617 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.618 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.619 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.620 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.621 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.622 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.623 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.624 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.625 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.626 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.627 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.628 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.629 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.630 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.631 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.632 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.633 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.634 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.635 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.636 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.637 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.638 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.639 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.640 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.640 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.640 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.640 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.640 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.641 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.641 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.641 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.641 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.642 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.643 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.643 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.643 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.643 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.643 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.644 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.644 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.644 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.644 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.644 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.645 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.645 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.645 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.645 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.645 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.646 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.646 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.646 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.646 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.646 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.647 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.647 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.647 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.647 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.648 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.648 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.648 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.648 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.648 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.649 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.649 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.649 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.649 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.649 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.650 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.650 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.650 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.650 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.651 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.651 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.651 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.651 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.651 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.652 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.652 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.652 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.652 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.652 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.653 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.653 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.653 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.653 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.653 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.654 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.655 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.656 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.657 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.658 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.659 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.660 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.661 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.662 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.663 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.664 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.664 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.664 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.664 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.670 255071 WARNING oslo_config.cfg [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:51:15 compute-0 nova_compute[255040]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:51:15 compute-0 nova_compute[255040]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:51:15 compute-0 nova_compute[255040]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:51:15 compute-0 nova_compute[255040]: ).  Its value may be silently ignored in the future.
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.671 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
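(Editor: a minimal nova.conf sketch of the non-deprecated form the warning above points to. Both option names appear in this log (libvirt.live_migration_scheme, libvirt.live_migration_inbound_addr); "tls" matches the qemu+tls URI shown on the preceding line, while the inbound address is a hypothetical placeholder, not a value from this deployment:

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    live_migration_inbound_addr = compute-0.internalapi.example.com
)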
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.671 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.672 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.672 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.673 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.674 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.674 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.675 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.675 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.675 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.676 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.676 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.676 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.677 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.677 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.677 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.678 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.678 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.678 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rbd_secret_uuid        = 321e9cb7-01a2-5759-bf8c-981c9a64aa3e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.679 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.679 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.679 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.680 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.680 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.680 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.681 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.681 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.681 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.682 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.682 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.682 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.683 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.683 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.683 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.684 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.684 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.684 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.685 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.685 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.685 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.686 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.686 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.686 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.687 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.687 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.687 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.687 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.688 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.688 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.689 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.689 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.689 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.690 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.690 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.690 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.691 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.691 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.691 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.692 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.692 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.693 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.693 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.693 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.694 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.694 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.694 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.695 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.695 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.695 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.696 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.696 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.696 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.697 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.697 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.697 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.698 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.698 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.698 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.699 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.699 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.699 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.700 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.700 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.700 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.701 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.701 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.702 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.702 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.702 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.703 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.703 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.703 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.704 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.704 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.705 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.705 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.705 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.706 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.706 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.706 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.707 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.707 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.707 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.708 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.708 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.708 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.709 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.709 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.709 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.710 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.710 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.710 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.711 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.711 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.711 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.712 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.712 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.712 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.713 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.713 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.713 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.714 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.714 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.715 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.715 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.715 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.716 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.716 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.716 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.716 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.717 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.717 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.717 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.717 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.717 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.718 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.718 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.718 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.718 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.719 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.719 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.719 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.719 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.719 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.720 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.720 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.720 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.720 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.720 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.721 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.721 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.721 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.721 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.722 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.722 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.722 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.722 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.722 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.723 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.723 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.723 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.723 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.723 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.724 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.724 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.724 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.724 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.725 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.726 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.726 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.726 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.726 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.726 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.727 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.727 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.727 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.727 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.728 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.728 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.728 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.728 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.729 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.729 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.729 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.729 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.729 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.730 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.730 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.730 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.731 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.731 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.731 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.731 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.732 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.732 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.732 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.732 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.732 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.733 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.733 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.733 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.733 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.733 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.734 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.734 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.734 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.734 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.734 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.735 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.735 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.735 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.735 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.735 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.736 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.736 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.736 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.736 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.737 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.737 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.737 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.737 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.737 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.738 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.738 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.738 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.738 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.739 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.739 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.739 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.739 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.739 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.740 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.740 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.740 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.740 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.741 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.741 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.741 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.741 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.741 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.742 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.742 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.742 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.743 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.743 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.744 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.744 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.744 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.744 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.745 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.745 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.745 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.745 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.745 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.746 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.746 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.746 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.746 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.746 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.747 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.747 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.747 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.747 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.747 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.748 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.748 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.748 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.749 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.749 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.749 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.749 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.750 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
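The [workarounds] group dumped above is mostly at Nova's upstream defaults; the visible overrides are enable_qemu_monitor_announce_self = True (after a live migration, Nova asks the QEMU monitor to re-announce the guest's network presence, here repeated 3 times at a 1-second interval per the two announce_self options above) and skip_cpu_compare_on_dest = True. A minimal nova.conf sketch that would yield these values, assuming upstream defaults for everything not listed:

    [workarounds]
    enable_qemu_monitor_announce_self = true
    qemu_monitor_announce_self_count = 3
    qemu_monitor_announce_self_interval = 1
    skip_cpu_compare_on_dest = true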
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.750 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.750 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.750 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.751 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.751 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.751 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.751 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.752 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.752 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.752 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.752 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.752 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.753 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.754 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
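The [oslo_policy] block above shows secure RBAC fully enabled: enforce_new_defaults = True makes the new role-based default rules authoritative, and enforce_scope = True rejects tokens whose scope does not match a rule's intended scope, with operator overrides read from policy.yaml plus any files under policy.d. An illustrative override file under those settings (the rule shown is an example, not taken from this deployment):

    # /etc/nova/policy.yaml (illustrative override only)
    "os_compute_api:os-hypervisors:list": "role:admin"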
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.755 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.756 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.756 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.756 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.756 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.756 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.757 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.758 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.759 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.760 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
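The [oslo_messaging_rabbit] settings above put the RPC transport on RabbitMQ quorum queues (rabbit_quorum_queue = True) with durable queues (amqp_durable_queues = True), a 60-second heartbeat sampled twice per interval, and TLS disabled (ssl = False, empty ssl_* paths). A nova.conf sketch of the quorum-related pieces, assuming the option names exactly as dumped:

    [oslo_messaging_rabbit]
    rabbit_quorum_queue = true
    amqp_durable_queues = true
    heartbeat_timeout_threshold = 60
    heartbeat_rate = 2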
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.761 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.762 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.763 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.764 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.765 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.766 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
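The [oslo_limit] group above is Nova's unified-limits client configuration: it authenticates to Keystone at https://keystone-internal.openstack.svc:5000 as user nova in the Default domain, using password auth with a system scope of "all", so resource limits can be fetched from Keystone rather than Nova's legacy quota tables. Expressed as a nova.conf sketch:

    [oslo_limit]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    user_domain_name = Default
    system_scope = all
    # password omitted; it is masked as **** in the dump above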
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.767 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.768 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.769 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.770 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.771 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.772 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.773 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.774 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.774 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.774 255071 DEBUG oslo_service.service [None req-9516514f-ae2d-4d9f-b281-3347bb23e09f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
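The row of asterisks above closes the startup option dump: every preceding "group.option = value" line is emitted by oslo.config's ConfigOpts.log_opt_values(), which nova-compute calls once at DEBUG level as the service starts (hence the cfg.py:2609/2613 call sites on each line). A minimal, self-contained sketch of the same mechanism, assuming only that oslo.config is installed; the single registered option is illustrative:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # One illustrative option; nova-compute registers thousands.
    CONF.register_opts([cfg.PortOpt('vnc_port', default=5900)], group='vmware')
    CONF([])  # parse an (empty) command line and any config files

    # Logs "vmware.vnc_port = 5900" as a DEBUG line, bracketed by
    # rows of asterisks, much like the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)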
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.775 255071 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:51:15 compute-0 podman[255344]: 2025-11-29 07:51:15.774898906 +0000 UTC m=+1.934760670 container remove ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:51:15 compute-0 systemd[1]: libpod-conmon-ab4ce93ca6824daf654f3274852408553b4fab12977574e8b5f4ff0ad50bc894.scope: Deactivated successfully.
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.794 255071 INFO nova.virt.node [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Determined node identity 858d78b2-ffcd-4247-ba96-0ec767fec62e from /var/lib/nova/compute_id
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.795 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.796 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.796 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.796 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:51:15 compute-0 sudo[255045]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.812 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f63cb1b0580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.815 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f63cb1b0580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:51:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.816 255071 INFO nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Connection event '1' reason 'None'
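Immediately after the connection event, nova.virt.libvirt.host queries and logs the host capabilities XML that follows. A minimal sketch that retrieves the same document with the libvirt Python bindings (libvirt-python), assuming access to the same qemu:///system URI; error handling is omitted:

    import libvirt

    # Same connection URI that nova-compute logged above.
    conn = libvirt.open('qemu:///system')
    print(conn.getCapabilities())  # XML starting with <capabilities><host>...
    conn.close()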
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.823 255071 INFO nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]: 
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <host>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <uuid>a28c55e7-2003-4883-bda8-258835775761</uuid>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <arch>x86_64</arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model>EPYC-Rome-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <vendor>AMD</vendor>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <microcode version='16777317'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='x2apic'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='tsc-deadline'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='osxsave'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='hypervisor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='tsc_adjust'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='spec-ctrl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='stibp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='arch-capabilities'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='cmp_legacy'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='topoext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='virt-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='lbrv'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='tsc-scale'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='vmcb-clean'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='pause-filter'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='pfthreshold'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='svme-addr-chk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='rdctl-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='mds-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature name='pschange-mc-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <pages unit='KiB' size='4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <pages unit='KiB' size='2048'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <power_management>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <suspend_mem/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </power_management>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <iommu support='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <migration_features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <live/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <uri_transports>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <uri_transport>tcp</uri_transport>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <uri_transport>rdma</uri_transport>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </uri_transports>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </migration_features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <topology>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <cells num='1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <cell id='0'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <memory unit='KiB'>7864324</memory>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <distances>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <sibling id='0' value='10'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           </distances>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           <cpus num='8'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:           </cpus>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         </cell>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </cells>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </topology>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <cache>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </cache>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <secmodel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model>selinux</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <doi>0</doi>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </secmodel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <secmodel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model>dac</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <doi>0</doi>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </secmodel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </host>
Nov 29 07:51:15 compute-0 nova_compute[255040]: 
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <guest>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <os_type>hvm</os_type>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <arch name='i686'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <wordsize>32</wordsize>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <domain type='qemu'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <domain type='kvm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <pae/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <nonpae/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <acpi default='on' toggle='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <apic default='on' toggle='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <cpuselection/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <deviceboot/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <externalSnapshot/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </guest>
Nov 29 07:51:15 compute-0 nova_compute[255040]: 
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <guest>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <os_type>hvm</os_type>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <arch name='x86_64'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <wordsize>64</wordsize>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <domain type='qemu'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <domain type='kvm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <acpi default='on' toggle='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <apic default='on' toggle='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <cpuselection/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <deviceboot/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <externalSnapshot/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </guest>
Nov 29 07:51:15 compute-0 nova_compute[255040]: 
Nov 29 07:51:15 compute-0 nova_compute[255040]: </capabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]: 
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.829 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.830 255071 DEBUG nova.virt.libvirt.volume.mount [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.834 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:51:15 compute-0 nova_compute[255040]: <domainCapabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <domain>kvm</domain>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <arch>i686</arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <vcpu max='4096'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <iothreads supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <os supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <enum name='firmware'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <loader supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>rom</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pflash</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='readonly'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>yes</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='secure'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </loader>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </os>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='maximumMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <vendor>AMD</vendor>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='succor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='custom' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-128'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-256'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-512'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SierraForest'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='athlon'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='athlon-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='core2duo'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='core2duo-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='coreduo'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='coreduo-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='n270'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='n270-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='phenom'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='phenom-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <memoryBacking supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <enum name='sourceType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>file</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>anonymous</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>memfd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </memoryBacking>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <disk supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='diskDevice'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>disk</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>cdrom</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>floppy</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>lun</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>fdc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>sata</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <graphics supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vnc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>egl-headless</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </graphics>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <video supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='modelType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vga</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>cirrus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>none</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>bochs</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>ramfb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </video>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <hostdev supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='mode'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>subsystem</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='startupPolicy'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>mandatory</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>requisite</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>optional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='subsysType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pci</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='capsType'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='pciBackend'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </hostdev>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <rng supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>random</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>egd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <filesystem supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='driverType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>path</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>handle</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtiofs</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </filesystem>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <tpm supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tpm-tis</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tpm-crb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>emulator</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>external</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendVersion'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>2.0</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </tpm>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <redirdev supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </redirdev>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <channel supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </channel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <crypto supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>qemu</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </crypto>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <interface supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>passt</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <panic supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>isa</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>hyperv</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </panic>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <console supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>null</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dev</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>file</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pipe</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>stdio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>udp</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tcp</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>qemu-vdagent</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </console>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <gic supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <genid supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <backup supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <async-teardown supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <ps2 supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <sev supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <sgx supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <hyperv supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='features'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>relaxed</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vapic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>spinlocks</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vpindex</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>runtime</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>synic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>stimer</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>reset</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vendor_id</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>frequencies</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>reenlightenment</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tlbflush</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>ipi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>avic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>emsr_bitmap</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>xmm_input</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <defaults>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </defaults>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </hyperv>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <launchSecurity supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='sectype'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tdx</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </launchSecurity>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </features>
Nov 29 07:51:15 compute-0 nova_compute[255040]: </domainCapabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.841 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:51:15 compute-0 nova_compute[255040]: <domainCapabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <domain>kvm</domain>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <arch>i686</arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <vcpu max='240'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <iothreads supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <os supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <enum name='firmware'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <loader supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>rom</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pflash</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='readonly'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>yes</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='secure'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </loader>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </os>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='maximumMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <vendor>AMD</vendor>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='succor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='custom' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-128'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-256'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-512'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:15 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6c9a4a5b-b3c4-4f44-b3ce-ce5abc8d0a8b does not exist
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a501d1c0-df45-49e9-b12c-8cbc746a766d does not exist
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SierraForest'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='athlon'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='athlon-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='core2duo'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='core2duo-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='coreduo'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='coreduo-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='n270'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='n270-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='phenom'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='phenom-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <memoryBacking supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <enum name='sourceType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>file</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>anonymous</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>memfd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </memoryBacking>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <disk supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='diskDevice'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>disk</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>cdrom</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>floppy</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>lun</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>ide</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>fdc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>sata</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <graphics supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vnc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>egl-headless</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </graphics>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <video supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='modelType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vga</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>cirrus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>none</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>bochs</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>ramfb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </video>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <hostdev supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='mode'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>subsystem</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='startupPolicy'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>mandatory</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>requisite</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>optional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='subsysType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pci</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='capsType'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='pciBackend'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </hostdev>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <rng supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>random</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>egd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <filesystem supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='driverType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>path</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>handle</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>virtiofs</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </filesystem>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <tpm supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tpm-tis</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tpm-crb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>emulator</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>external</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendVersion'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>2.0</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </tpm>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <redirdev supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </redirdev>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <channel supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </channel>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <crypto supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>qemu</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </crypto>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <interface supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='backendType'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>passt</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <panic supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>isa</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>hyperv</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </panic>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <console supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>null</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vc</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dev</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>file</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pipe</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>stdio</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>udp</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tcp</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>qemu-vdagent</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </console>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <features>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <gic supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <genid supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <backup supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <async-teardown supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <ps2 supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <sev supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <sgx supported='no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <hyperv supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='features'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>relaxed</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vapic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>spinlocks</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vpindex</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>runtime</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>synic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>stimer</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>reset</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>vendor_id</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>frequencies</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>reenlightenment</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tlbflush</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>ipi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>avic</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>emsr_bitmap</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>xmm_input</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <defaults>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </defaults>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </hyperv>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <launchSecurity supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='sectype'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>tdx</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </launchSecurity>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </features>
Nov 29 07:51:15 compute-0 nova_compute[255040]: </domainCapabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
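The domainCapabilities XML in the record above can be fetched directly with libvirt-python, much as the _get_domain_capabilities call logged here (host.py:1037) does. A minimal sketch follows; the connection URI is an assumption, and the emulator path, arch, and machine type are taken from the q35 record logged below:

    import libvirt
    import xml.etree.ElementTree as ET

    # Connect to the local system libvirtd (URI is an assumption).
    conn = libvirt.open('qemu:///system')

    # Same query nova logs here: emulator binary, arch, machine type, virt type.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'pc-q35-rhel9.8.0', 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # The dump above reports <tpm supported='yes'> with tpm-tis/tpm-crb models...
    tpm = root.find('./devices/tpm')
    print('tpm supported:', tpm.get('supported'))
    print('tpm models:', [v.text for v in tpm.findall("./enum[@name='model']/value")])

    # ...and <sev supported='no'>, <sgx supported='no'>, <vmcoreinfo supported='yes'>
    # under <features>.
    for feat in ('sev', 'sgx', 'vmcoreinfo'):
        print(feat, 'supported:', root.find('./features/%s' % feat).get('supported'))

    conn.close()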
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.879 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
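The _get_machine_types debug line above shows the shape of the lookup: one getDomainCapabilities call per (arch, machine type) pair, here {'q35', 'pc'} for x86_64. A rough, illustrative equivalent of that loop (function name and caching scheme are assumptions, not nova's actual code):

    def domain_caps_by_machine_type(conn, arch_machine_map, virt_type='kvm'):
        """Fetch domainCapabilities XML once per arch/machine-type pair."""
        caps = {}
        for arch, machine_types in arch_machine_map.items():
            for machine in sorted(machine_types):
                # Passing None lets libvirt pick the default emulator for the arch.
                caps[(arch, machine)] = conn.getDomainCapabilities(
                    None, arch, machine, virt_type, 0)
        return caps

    # Matching the debug line above:
    #   domain_caps_by_machine_type(conn, {'x86_64': {'q35', 'pc'}})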
Nov 29 07:51:15 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.885 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:51:15 compute-0 nova_compute[255040]: <domainCapabilities>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <domain>kvm</domain>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <arch>x86_64</arch>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <vcpu max='4096'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <iothreads supported='yes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <os supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <enum name='firmware'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>efi</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <loader supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>rom</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>pflash</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='readonly'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>yes</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='secure'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>yes</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </loader>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   </os>
Nov 29 07:51:15 compute-0 nova_compute[255040]:   <cpu>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <enum name='maximumMigratable'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <vendor>AMD</vendor>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='succor'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:15 compute-0 nova_compute[255040]:     <mode name='custom' supported='yes'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Denverton-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='EPYC-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 sudo[255505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 sudo[255505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-128'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-256'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx10-512'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 sudo[255505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Haswell-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:15 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:15 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SierraForest'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='athlon'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='athlon-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='core2duo'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='core2duo-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='coreduo'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='coreduo-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='n270'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='n270-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='phenom'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='phenom-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <memoryBacking supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <enum name='sourceType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>file</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>anonymous</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>memfd</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </memoryBacking>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <disk supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='diskDevice'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>disk</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>cdrom</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>floppy</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>lun</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>fdc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>sata</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <graphics supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vnc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>egl-headless</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </graphics>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <video supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='modelType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vga</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>cirrus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>none</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>bochs</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>ramfb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </video>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <hostdev supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='mode'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>subsystem</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='startupPolicy'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>mandatory</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>requisite</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>optional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='subsysType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pci</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='capsType'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='pciBackend'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </hostdev>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <rng supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>random</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>egd</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <filesystem supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='driverType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>path</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>handle</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtiofs</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </filesystem>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <tpm supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tpm-tis</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tpm-crb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>emulator</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>external</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendVersion'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>2.0</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </tpm>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <redirdev supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </redirdev>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <channel supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </channel>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <crypto supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>qemu</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </crypto>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <interface supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>passt</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <panic supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>isa</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>hyperv</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </panic>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <console supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>null</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dev</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>file</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pipe</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>stdio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>udp</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tcp</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>qemu-vdagent</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </console>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <features>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <gic supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <genid supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <backup supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <async-teardown supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <ps2 supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <sev supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <sgx supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <hyperv supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='features'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>relaxed</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vapic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>spinlocks</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vpindex</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>runtime</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>synic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>stimer</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>reset</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vendor_id</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>frequencies</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>reenlightenment</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tlbflush</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>ipi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>avic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>emsr_bitmap</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>xmm_input</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <defaults>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </defaults>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </hyperv>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <launchSecurity supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='sectype'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tdx</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </launchSecurity>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </features>
Nov 29 07:51:16 compute-0 nova_compute[255040]: </domainCapabilities>
Nov 29 07:51:16 compute-0 nova_compute[255040]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:15.950 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:51:16 compute-0 nova_compute[255040]: <domainCapabilities>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <domain>kvm</domain>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <arch>x86_64</arch>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <vcpu max='240'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <iothreads supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <os supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <enum name='firmware'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <loader supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>rom</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pflash</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='readonly'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>yes</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='secure'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>no</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </loader>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </os>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <cpu>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <mode name='maximum' supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='maximumMigratable'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>on</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>off</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <mode name='host-model' supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <vendor>AMD</vendor>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='x2apic'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='stibp'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='ssbd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='succor'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='ibrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='lbrv'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <mode name='custom' supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Broadwell-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 sudo[255530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cooperlake'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 sudo[255530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Cooperlake-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Denverton'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Denverton-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Denverton-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Denverton-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Dhyana-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='auto-ibrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 sudo[255530]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amd-psfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='no-nested-data-bp'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='null-sel-clr-base'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='stibp-always-on'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='EPYC-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx10'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx10-128'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx10-256'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx10-512'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='prefetchiti'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Haswell-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='IvyBridge'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='IvyBridge-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='KnightsMill'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='KnightsMill-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-4fmaps'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-4vnniw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512er'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512pf'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fma4'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tbm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xop'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='amx-tile'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-bf16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-fp16'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bitalg'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vbmi2'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrc'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fzrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='la57'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='taa-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='tsx-ldtrk'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xfd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SierraForest'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='SierraForest-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ifma'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-ne-convert'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx-vnni-int8'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='bus-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cmpccxadd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fbsdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='fsrs'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ibrs-all'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mcdt-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pbrsb-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='psdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='serialize'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vaes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='vpclmulqdq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='hle'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='rtm'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512bw'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512cd'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512dq'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512f'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='avx512vl'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='invpcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pcid'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='pku'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='mpx'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v2'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v3'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='core-capability'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='split-lock-detect'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='Snowridge-v4'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='cldemote'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='erms'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='gfni'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdir64b'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='movdiri'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='xsaves'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='athlon'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='athlon-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='core2duo'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='core2duo-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='coreduo'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='coreduo-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='n270'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='n270-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='ss'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='phenom'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <blockers model='phenom-v1'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnow'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <feature name='3dnowext'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </blockers>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </mode>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <memoryBacking supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <enum name='sourceType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>file</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>anonymous</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <value>memfd</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </memoryBacking>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <disk supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='diskDevice'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>disk</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>cdrom</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>floppy</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>lun</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>ide</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>fdc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>sata</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <graphics supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vnc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>egl-headless</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </graphics>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <video supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='modelType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vga</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>cirrus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>none</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>bochs</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>ramfb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </video>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <hostdev supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='mode'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>subsystem</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='startupPolicy'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>mandatory</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>requisite</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>optional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='subsysType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pci</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>scsi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='capsType'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='pciBackend'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </hostdev>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <rng supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtio-non-transitional</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>random</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>egd</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <filesystem supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='driverType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>path</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>handle</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>virtiofs</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </filesystem>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <tpm supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tpm-tis</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tpm-crb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>emulator</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>external</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendVersion'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>2.0</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </tpm>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <redirdev supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='bus'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>usb</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </redirdev>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <channel supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </channel>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <crypto supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>qemu</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendModel'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>builtin</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </crypto>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <interface supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='backendType'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>default</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>passt</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <panic supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='model'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>isa</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>hyperv</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </panic>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <console supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='type'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>null</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vc</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pty</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dev</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>file</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>pipe</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>stdio</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>udp</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tcp</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>unix</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>qemu-vdagent</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>dbus</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </console>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   <features>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <gic supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <vmcoreinfo supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <genid supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <backingStoreInput supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <backup supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <async-teardown supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <ps2 supported='yes'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <sev supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <sgx supported='no'/>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <hyperv supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='features'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>relaxed</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vapic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>spinlocks</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vpindex</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>runtime</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>synic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>stimer</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>reset</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>vendor_id</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>frequencies</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>reenlightenment</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tlbflush</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>ipi</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>avic</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>emsr_bitmap</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>xmm_input</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <defaults>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <spinlocks>4095</spinlocks>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <stimer_direct>on</stimer_direct>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </defaults>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </hyperv>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     <launchSecurity supported='yes'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       <enum name='sectype'>
Nov 29 07:51:16 compute-0 nova_compute[255040]:         <value>tdx</value>
Nov 29 07:51:16 compute-0 nova_compute[255040]:       </enum>
Nov 29 07:51:16 compute-0 nova_compute[255040]:     </launchSecurity>
Nov 29 07:51:16 compute-0 nova_compute[255040]:   </features>
Nov 29 07:51:16 compute-0 nova_compute[255040]: </domainCapabilities>
Nov 29 07:51:16 compute-0 nova_compute[255040]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
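
The <domainCapabilities> document that ends above is the raw capability report nova just fetched from libvirt via _get_domain_capabilities; the long run of <model usable='no'> / <blockers> pairs is easier to read once summarized. A minimal sketch, assuming the XML has been saved locally as domcaps.xml (a hypothetical path; the same document can be fetched with `virsh domcapabilities`), using only the Python standard library:

    import xml.etree.ElementTree as ET

    # Load a saved copy of the <domainCapabilities> XML shown above.
    root = ET.parse('domcaps.xml').getroot()

    for model in root.iter('model'):
        usable = model.get('usable')
        if usable is None:
            continue  # e.g. the host-model <model> element carries no flag
        name = model.text
        if usable == 'yes':
            print(f'{name}: usable')
        else:
            # Each unusable model has a matching <blockers model='...'> list.
            blockers = root.find(f".//blockers[@model='{name}']")
            feats = [] if blockers is None else [
                f.get('name') for f in blockers.iter('feature')]
            print(f"{name}: blocked by {', '.join(feats)}")

Run against this host's dump, it would report the Westmere variants as usable while every Skylake, SapphireRapids, SierraForest, and Snowridge variant is blocked on the feature lists logged above.
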
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.021 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.021 255071 INFO nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Secure Boot support detected
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.023 255071 INFO nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.024 255071 INFO nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.033 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.052 255071 INFO nova.virt.node [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Determined node identity 858d78b2-ffcd-4247-ba96-0ec767fec62e from /var/lib/nova/compute_id
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.067 255071 WARNING nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Compute nodes ['858d78b2-ffcd-4247-ba96-0ec767fec62e'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.095 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.110 255071 WARNING nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.111 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.111 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.111 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.111 255071 DEBUG nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.112 255071 DEBUG oslo_concurrency.processutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:16 compute-0 ceph-mon[75237]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:16 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:51:16 compute-0 rsyslogd[1002]: imjournal from <np0005539583:nova_compute>: begin to drop messages due to rate-limiting
Nov 29 07:51:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847381128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.588 255071 DEBUG oslo_concurrency.processutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
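[editor's note] Here the resource tracker shells out to `ceph df` to size its RBD-backed storage (the command returned 0 in 0.476s). A minimal sketch of issuing the same call and reading the result, assuming a reachable cluster and a valid client.openstack keyring; since the JSON schema varies across Ceph releases, the sketch avoids assuming field names:

    import json
    import subprocess

    # Same command the log shows oslo_concurrency.processutils running.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
    )
    df = json.loads(out)
    print(df.get("stats"))                               # cluster-wide totals
    print([p.get("name") for p in df.get("pools", [])])  # per-pool entries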
Nov 29 07:51:16 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:51:16 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.921 255071 WARNING nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.923 255071 DEBUG nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5154MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.923 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:16 compute-0 nova_compute[255040]: 2025-11-29 07:51:16.923 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:17 compute-0 nova_compute[255040]: 2025-11-29 07:51:17.058 255071 WARNING nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] No compute node record for compute-0.ctlplane.example.com:858d78b2-ffcd-4247-ba96-0ec767fec62e: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 858d78b2-ffcd-4247-ba96-0ec767fec62e could not be found.
Nov 29 07:51:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:17 compute-0 nova_compute[255040]: 2025-11-29 07:51:17.331 255071 INFO nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 858d78b2-ffcd-4247-ba96-0ec767fec62e
Nov 29 07:51:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2847381128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:18 compute-0 nova_compute[255040]: 2025-11-29 07:51:18.462 255071 DEBUG nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:51:18 compute-0 nova_compute[255040]: 2025-11-29 07:51:18.463 255071 DEBUG nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:51:18 compute-0 ceph-mon[75237]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:19 compute-0 nova_compute[255040]: 2025-11-29 07:51:19.339 255071 INFO nova.scheduler.client.report [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [req-a1ebbc1f-c0ec-4d20-a6d9-05ada04a7857] Created resource provider record via placement API for resource provider with UUID 858d78b2-ffcd-4247-ba96-0ec767fec62e and name compute-0.ctlplane.example.com.
Nov 29 07:51:19 compute-0 nova_compute[255040]: 2025-11-29 07:51:19.697 255071 DEBUG oslo_concurrency.processutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484427901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:21 compute-0 ceph-mon[75237]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.158 255071 DEBUG oslo_concurrency.processutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.165 255071 DEBUG nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.166 255071 INFO nova.virt.libvirt.host [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] kernel doesn't support AMD SEV
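[editor's note] The SEV decision above comes from reading a single sysfs file, which on this host contains "N". A sketch of the same probe; the set of accepted "enabled" values is an assumption inferred from the logged "N", and the path only exists while the kvm_amd module is loaded, hence the existence check:

    from pathlib import Path

    # Probe the same sysfs parameter nova's _kernel_supports_amd_sev reads.
    sev = Path("/sys/module/kvm_amd/parameters/sev")
    supported = sev.exists() and sev.read_text().strip().lower() in ("y", "1")
    print("kernel supports AMD SEV" if supported else "kernel doesn't support AMD SEV")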
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.167 255071 DEBUG nova.compute.provider_tree [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.168 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.218 255071 DEBUG nova.scheduler.client.report [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Updated inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.219 255071 DEBUG nova.compute.provider_tree [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Updating resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.219 255071 DEBUG nova.compute.provider_tree [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
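[editor's note] This inventory fully determines what Placement will hand out. Under the usual Placement capacity rule, capacity = (total - reserved) * allocation_ratio, the logged numbers give 7168 MB of schedulable RAM, 32 vCPUs, and 53.1 GB of disk. A sketch with the values copied from the lines above:

    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1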
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.302 255071 DEBUG nova.compute.provider_tree [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Updating resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.334 255071 DEBUG nova.compute.resource_tracker [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.334 255071 DEBUG oslo_concurrency.lockutils [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.334 255071 DEBUG nova.service [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.418 255071 DEBUG nova.service [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 07:51:21 compute-0 nova_compute[255040]: 2025-11-29 07:51:21.419 255071 DEBUG nova.servicegroup.drivers.db [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 07:51:21 compute-0 podman[255622]: 2025-11-29 07:51:21.95579353 +0000 UTC m=+0.118169353 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 07:51:22 compute-0 ceph-mon[75237]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/484427901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:22 compute-0 nova_compute[255040]: 2025-11-29 07:51:22.420 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:22 compute-0 nova_compute[255040]: 2025-11-29 07:51:22.443 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:24 compute-0 ceph-mon[75237]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:26 compute-0 ceph-mon[75237]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:51:27.111 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:51:27.112 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:51:27.112 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:28 compute-0 ceph-mon[75237]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:28 compute-0 sshd[189732]: drop connection #0 from [45.78.219.195]:37986 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:51:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:30 compute-0 ceph-mon[75237]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:32 compute-0 ceph-mon[75237]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:33 compute-0 podman[255648]: 2025-11-29 07:51:33.896437403 +0000 UTC m=+0.063714932 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:34 compute-0 ceph-mon[75237]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.250119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694250384, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 507, "total_data_size": 3698726, "memory_usage": 3767576, "flush_reason": "Manual Compaction"}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694296691, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 3621365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13319, "largest_seqno": 15706, "table_properties": {"data_size": 3610982, "index_size": 6237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 22631, "raw_average_key_size": 18, "raw_value_size": 3588525, "raw_average_value_size": 2968, "num_data_blocks": 283, "num_entries": 1209, "num_filter_entries": 1209, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402445, "oldest_key_time": 1764402445, "file_creation_time": 1764402694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 46703 microseconds, and 14362 cpu microseconds.
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.296844) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 3621365 bytes OK
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.296888) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.299276) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.299293) EVENT_LOG_v1 {"time_micros": 1764402694299288, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.299330) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3687815, prev total WAL file size 3687815, number of live WAL files 2.
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.301006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(3536KB)], [32(7097KB)]
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694301389, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10889200, "oldest_snapshot_seqno": -1}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4259 keys, 8728582 bytes, temperature: kUnknown
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694392685, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8728582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8696927, "index_size": 19897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104556, "raw_average_key_size": 24, "raw_value_size": 8616707, "raw_average_value_size": 2023, "num_data_blocks": 839, "num_entries": 4259, "num_filter_entries": 4259, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.393145) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8728582 bytes
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.395076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.0 rd, 95.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 6.9 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 5286, records dropped: 1027 output_compression: NoCompression
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.395131) EVENT_LOG_v1 {"time_micros": 1764402694395115, "job": 14, "event": "compaction_finished", "compaction_time_micros": 91485, "compaction_time_cpu_micros": 34056, "output_level": 6, "num_output_files": 1, "total_output_size": 8728582, "num_input_records": 5286, "num_output_records": 4259, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
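[editor's note] JOB 14's summary figures are internally consistent: taking the L0 input (table #34, 3,621,365 bytes), the total input (10,889,200 bytes from the compaction_started event) and the output (8,728,582 bytes from compaction_finished), the logged write-amplify(2.4) and read-write-amplify(5.4) follow as output/L0 and (input+output)/L0. A quick check:

    # Byte counts copied from the JOB 14 events above.
    l0_in, total_in, out = 3_621_365, 10_889_200, 8_728_582
    print(round(out / l0_in, 1))               # 2.4 -> write-amplify
    print(round((total_in + out) / l0_in, 1))  # 5.4 -> read-write-amplify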
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694396353, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402694397960, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.300743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.398135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.398144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.398146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.398149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:34 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:51:34.398151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:51:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:35 compute-0 sshd-session[255667]: Invalid user testftp from 114.34.106.146 port 52556
Nov 29 07:51:35 compute-0 sshd-session[255667]: Received disconnect from 114.34.106.146 port 52556:11: Bye Bye [preauth]
Nov 29 07:51:35 compute-0 sshd-session[255667]: Disconnected from invalid user testftp 114.34.106.146 port 52556 [preauth]
Nov 29 07:51:36 compute-0 ceph-mon[75237]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:38 compute-0 ceph-mon[75237]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:51:38
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'volumes', 'images', '.mgr', 'backups', '.rgw.root']
Nov 29 07:51:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:51:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2020140383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2020140383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:39 compute-0 podman[255669]: 2025-11-29 07:51:39.907587169 +0000 UTC m=+0.070051741 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:51:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/847464995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/847464995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2020140383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2020140383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/847464995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/847464995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2203011402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2203011402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2203011402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2203011402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:41 compute-0 rsyslogd[1002]: imjournal: 2180 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 07:51:42 compute-0 ceph-mon[75237]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:51:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:51:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:43 compute-0 ceph-mon[75237]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:46 compute-0 sshd-session[255689]: Invalid user rahul from 103.234.151.178 port 45498
Nov 29 07:51:47 compute-0 sshd-session[255689]: Received disconnect from 103.234.151.178 port 45498:11: Bye Bye [preauth]
Nov 29 07:51:47 compute-0 sshd-session[255689]: Disconnected from invalid user rahul 103.234.151.178 port 45498 [preauth]
Nov 29 07:51:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:47 compute-0 ceph-mon[75237]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:48 compute-0 ceph-mon[75237]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:50 compute-0 ceph-mon[75237]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:52 compute-0 ceph-mon[75237]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:52 compute-0 podman[255691]: 2025-11-29 07:51:52.990672359 +0000 UTC m=+0.139792632 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 07:51:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:51:54 compute-0 ceph-mon[75237]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:55 compute-0 sshd-session[255717]: Received disconnect from 103.236.140.19 port 37590:11: Bye Bye [preauth]
Nov 29 07:51:55 compute-0 sshd-session[255717]: Disconnected from authenticating user root 103.236.140.19 port 37590 [preauth]
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
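[editor's note] Every nonzero pg target above is space_ratio * bias * K with K exactly 300 for this cluster; 300 is plausibly mon_target_pg_per_osd (default 100) times three OSDs, but that is an inference, not something the log states. A check against three of the logged pools:

    # pg_target = space_ratio * bias * K; K = 300 reproduces the log.
    K = 300
    for name, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0),
    ]:
        print(name, ratio * bias * K)  # matches the logged pg targets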
Nov 29 07:51:56 compute-0 ceph-mon[75237]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:58 compute-0 ceph-mon[75237]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:51:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:00 compute-0 ceph-mon[75237]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:01 compute-0 ceph-mon[75237]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:03 compute-0 ceph-mon[75237]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:04 compute-0 podman[255719]: 2025-11-29 07:52:04.89399576 +0000 UTC m=+0.055693715 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 07:52:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:06 compute-0 ceph-mon[75237]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:08 compute-0 ceph-mon[75237]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:09 compute-0 ceph-mon[75237]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:10 compute-0 podman[255738]: 2025-11-29 07:52:10.926571071 +0000 UTC m=+0.089005940 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:52:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:12 compute-0 ceph-mon[75237]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:14 compute-0 ceph-mon[75237]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:14 compute-0 nova_compute[255040]: 2025-11-29 07:52:14.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:14 compute-0 nova_compute[255040]: 2025-11-29 07:52:14.980 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:14 compute-0 nova_compute[255040]: 2025-11-29 07:52:14.981 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:52:14 compute-0 nova_compute[255040]: 2025-11-29 07:52:14.981 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.006 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.007 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.008 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.008 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.009 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.009 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.009 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.010 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.010 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.079 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.080 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.080 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.080 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.081 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26643646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.598 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.836 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.838 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5193MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.838 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:15 compute-0 nova_compute[255040]: 2025-11-29 07:52:15.838 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:15 compute-0 ceph-mon[75237]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/26643646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.046 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.047 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.084 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:16 compute-0 sudo[255780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:16 compute-0 sudo[255780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:16 compute-0 sudo[255780]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:16 compute-0 sudo[255806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:16 compute-0 sudo[255806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:16 compute-0 sudo[255806]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:16 compute-0 sudo[255841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:16 compute-0 sudo[255841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:16 compute-0 sudo[255841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:16 compute-0 sudo[255875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:52:16 compute-0 sudo[255875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800856911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.709 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.714 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.769 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.818 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:52:16 compute-0 nova_compute[255040]: 2025-11-29 07:52:16.819 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3800856911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:18 compute-0 podman[255972]: 2025-11-29 07:52:18.247235759 +0000 UTC m=+1.338935551 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:52:19 compute-0 podman[255972]: 2025-11-29 07:52:19.002889977 +0000 UTC m=+2.094589709 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:52:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:19 compute-0 ceph-mon[75237]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:19 compute-0 sudo[255875]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:52:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:52:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:20 compute-0 sudo[256131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:20 compute-0 sudo[256131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:20 compute-0 sudo[256131]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:20 compute-0 sudo[256156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:20 compute-0 sudo[256156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:20 compute-0 sudo[256156]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:20 compute-0 sudo[256181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:20 compute-0 sudo[256181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:20 compute-0 sudo[256181]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:20 compute-0 sudo[256206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:52:20 compute-0 sudo[256206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:21 compute-0 ceph-mon[75237]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:21 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:21 compute-0 sudo[256206]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a1c30673-903d-42b6-851a-d98caefe0365 does not exist
Nov 29 07:52:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6674bb10-0813-4459-a7ae-86b223a983b3 does not exist
Nov 29 07:52:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 038be4fb-0dea-4553-9a39-36a67eff8a20 does not exist
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:52:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:52:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:52:21 compute-0 sudo[256261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:21 compute-0 sudo[256261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:21 compute-0 sudo[256261]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:21 compute-0 sudo[256286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:21 compute-0 sudo[256286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:21 compute-0 sudo[256286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:21 compute-0 sudo[256311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:21 compute-0 sudo[256311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:21 compute-0 sudo[256311]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:21 compute-0 sudo[256336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:52:21 compute-0 sudo[256336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:22 compute-0 ceph-mon[75237]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:52:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.265081447 +0000 UTC m=+0.053474756 container create 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:22 compute-0 systemd[1]: Started libpod-conmon-3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb.scope.
Nov 29 07:52:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.239645284 +0000 UTC m=+0.028038573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.344932439 +0000 UTC m=+0.133325748 container init 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.35203262 +0000 UTC m=+0.140425889 container start 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.355564565 +0000 UTC m=+0.143957904 container attach 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:52:22 compute-0 modest_williams[256417]: 167 167
Nov 29 07:52:22 compute-0 systemd[1]: libpod-3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb.scope: Deactivated successfully.
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.357149428 +0000 UTC m=+0.145542717 container died 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-40abc762eb07a178e9b63d895dfb692af61051c8eb0e0f6e37207b54c34f1261-merged.mount: Deactivated successfully.
Nov 29 07:52:22 compute-0 podman[256401]: 2025-11-29 07:52:22.39859091 +0000 UTC m=+0.186984189 container remove 3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:52:22 compute-0 systemd[1]: libpod-conmon-3916a63f549daaf295d2d8d1a3efbb462abeb793efd760dd7efed41cedb18fcb.scope: Deactivated successfully.
Nov 29 07:52:22 compute-0 podman[256440]: 2025-11-29 07:52:22.601920286 +0000 UTC m=+0.055391918 container create db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:22 compute-0 systemd[1]: Started libpod-conmon-db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb.scope.
Nov 29 07:52:22 compute-0 podman[256440]: 2025-11-29 07:52:22.572160187 +0000 UTC m=+0.025631879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:22 compute-0 podman[256440]: 2025-11-29 07:52:22.707896469 +0000 UTC m=+0.161368121 container init db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:52:22 compute-0 podman[256440]: 2025-11-29 07:52:22.721539786 +0000 UTC m=+0.175011388 container start db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:22 compute-0 podman[256440]: 2025-11-29 07:52:22.727371852 +0000 UTC m=+0.180843554 container attach db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:52:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:23 compute-0 priceless_cohen[256457]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:52:23 compute-0 priceless_cohen[256457]: --> relative data size: 1.0
Nov 29 07:52:23 compute-0 priceless_cohen[256457]: --> All data devices are unavailable
Nov 29 07:52:23 compute-0 podman[256478]: 2025-11-29 07:52:23.986716707 +0000 UTC m=+0.144118099 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:24 compute-0 systemd[1]: libpod-db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb.scope: Deactivated successfully.
Nov 29 07:52:24 compute-0 systemd[1]: libpod-db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb.scope: Consumed 1.239s CPU time.
Nov 29 07:52:24 compute-0 conmon[256457]: conmon db191c2b7568bd29614e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb.scope/container/memory.events
Nov 29 07:52:24 compute-0 podman[256440]: 2025-11-29 07:52:24.018519069 +0000 UTC m=+1.471990671 container died db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf77fe8547ef1b6c1befdd2bf1da021ae8e413101dcefb433fa88ed97d3337af-merged.mount: Deactivated successfully.
Nov 29 07:52:24 compute-0 podman[256440]: 2025-11-29 07:52:24.073333901 +0000 UTC m=+1.526805503 container remove db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:24 compute-0 systemd[1]: libpod-conmon-db191c2b7568bd29614eb0c723622622c6a4fe43e98a66780995a571c7e31cfb.scope: Deactivated successfully.
Nov 29 07:52:24 compute-0 sudo[256336]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:24 compute-0 sudo[256523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:24 compute-0 sudo[256523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:24 compute-0 sudo[256523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:24 compute-0 ceph-mon[75237]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:24 compute-0 sudo[256548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:24 compute-0 sudo[256548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:24 compute-0 sudo[256548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:24 compute-0 sudo[256573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:24 compute-0 sudo[256573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:24 compute-0 sudo[256573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:24 compute-0 sudo[256598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:52:24 compute-0 sudo[256598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.726754795 +0000 UTC m=+0.040410286 container create 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:52:24 compute-0 systemd[1]: Started libpod-conmon-8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c.scope.
Nov 29 07:52:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.709378669 +0000 UTC m=+0.023034150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.818479637 +0000 UTC m=+0.132135198 container init 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.831530786 +0000 UTC m=+0.145186277 container start 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.837473316 +0000 UTC m=+0.151128907 container attach 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:24 compute-0 objective_hoover[256679]: 167 167
Nov 29 07:52:24 compute-0 systemd[1]: libpod-8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c.scope: Deactivated successfully.
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.841426952 +0000 UTC m=+0.155082523 container died 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb4087c5b3ccf98a21efb860389a026390f75dc18817df547fb27e49de6e70b8-merged.mount: Deactivated successfully.
Nov 29 07:52:24 compute-0 podman[256663]: 2025-11-29 07:52:24.889130432 +0000 UTC m=+0.202785913 container remove 8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:52:24 compute-0 systemd[1]: libpod-conmon-8bb88f5c468dbac4ea47f8c333b6bb45977117f16c8253272bd25bc9f3c4e00c.scope: Deactivated successfully.
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.079787608 +0000 UTC m=+0.055941822 container create 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:25 compute-0 systemd[1]: Started libpod-conmon-3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad.scope.
Nov 29 07:52:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.055878427 +0000 UTC m=+0.032032621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61bbc08a382fd5e4b20265030608ce40d3bb62edad37af8b56be520bb7532092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61bbc08a382fd5e4b20265030608ce40d3bb62edad37af8b56be520bb7532092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61bbc08a382fd5e4b20265030608ce40d3bb62edad37af8b56be520bb7532092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61bbc08a382fd5e4b20265030608ce40d3bb62edad37af8b56be520bb7532092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.176188305 +0000 UTC m=+0.152342509 container init 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.188954178 +0000 UTC m=+0.165108352 container start 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.194162238 +0000 UTC m=+0.170316442 container attach 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:52:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 07:52:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2892335317' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:52:25 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14347 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:52:25 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:52:25 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:52:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]: {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     "0": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "devices": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "/dev/loop3"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             ],
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_name": "ceph_lv0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_size": "21470642176",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "name": "ceph_lv0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "tags": {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.crush_device_class": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.encrypted": "0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_id": "0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.vdo": "0"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             },
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "vg_name": "ceph_vg0"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         }
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     ],
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     "1": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "devices": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "/dev/loop4"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             ],
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_name": "ceph_lv1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_size": "21470642176",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "name": "ceph_lv1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "tags": {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.crush_device_class": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.encrypted": "0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_id": "1",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.vdo": "0"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             },
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "vg_name": "ceph_vg1"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         }
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     ],
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     "2": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "devices": [
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "/dev/loop5"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             ],
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_name": "ceph_lv2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_size": "21470642176",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "name": "ceph_lv2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "tags": {
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.crush_device_class": "",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.encrypted": "0",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osd_id": "2",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:                 "ceph.vdo": "0"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             },
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "type": "block",
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:             "vg_name": "ceph_vg2"
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:         }
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]:     ]
Nov 29 07:52:25 compute-0 frosty_lumiere[256721]: }
Nov 29 07:52:25 compute-0 systemd[1]: libpod-3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad.scope: Deactivated successfully.
Nov 29 07:52:25 compute-0 podman[256704]: 2025-11-29 07:52:25.988872354 +0000 UTC m=+0.965026548 container died 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:52:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-61bbc08a382fd5e4b20265030608ce40d3bb62edad37af8b56be520bb7532092-merged.mount: Deactivated successfully.
Nov 29 07:52:26 compute-0 podman[256704]: 2025-11-29 07:52:26.051180655 +0000 UTC m=+1.027334829 container remove 3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:26 compute-0 systemd[1]: libpod-conmon-3f08ac43b0d87e993a3a8db9a10b1ef55e99e13f845f615f17d89628da11a3ad.scope: Deactivated successfully.
Nov 29 07:52:26 compute-0 sudo[256598]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:26 compute-0 sudo[256744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:26 compute-0 sudo[256744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:26 compute-0 sudo[256744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:26 compute-0 sudo[256769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:26 compute-0 sudo[256769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:26 compute-0 sudo[256769]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:26 compute-0 sudo[256794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:26 compute-0 sudo[256794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:26 compute-0 sudo[256794]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:26 compute-0 ceph-mon[75237]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2892335317' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:52:26 compute-0 ceph-mon[75237]: from='client.14347 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:52:26 compute-0 sudo[256819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:52:26 compute-0 sudo[256819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:52:27.112 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:52:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:52:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.128853434 +0000 UTC m=+0.043631091 container create 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:27 compute-0 systemd[1]: Started libpod-conmon-2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea.scope.
Nov 29 07:52:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.109857124 +0000 UTC m=+0.024634811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.219967119 +0000 UTC m=+0.134744886 container init 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.227940264 +0000 UTC m=+0.142717921 container start 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.231834877 +0000 UTC m=+0.146612644 container attach 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:52:27 compute-0 eager_rubin[256901]: 167 167
Nov 29 07:52:27 compute-0 systemd[1]: libpod-2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea.scope: Deactivated successfully.
Nov 29 07:52:27 compute-0 conmon[256901]: conmon 2809e32cef7f39840dbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea.scope/container/memory.events
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.236259187 +0000 UTC m=+0.151036844 container died 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1de2a204ed03d29119feb1a84cc5ae79c74d73257f54cdf9e1179cfc0154d9-merged.mount: Deactivated successfully.
Nov 29 07:52:27 compute-0 podman[256884]: 2025-11-29 07:52:27.274273787 +0000 UTC m=+0.189051434 container remove 2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:27 compute-0 systemd[1]: libpod-conmon-2809e32cef7f39840dbce76aa2c378a2225dd870b5199ca53464154926d4f0ea.scope: Deactivated successfully.
Nov 29 07:52:27 compute-0 podman[256926]: 2025-11-29 07:52:27.494828885 +0000 UTC m=+0.059659431 container create 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:27 compute-0 systemd[1]: Started libpod-conmon-79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78.scope.
Nov 29 07:52:27 compute-0 podman[256926]: 2025-11-29 07:52:27.464736768 +0000 UTC m=+0.029567364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab88cefd6a8d29b5aef38230cab64e6de0ec34129f52809366f20f247da54c03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab88cefd6a8d29b5aef38230cab64e6de0ec34129f52809366f20f247da54c03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab88cefd6a8d29b5aef38230cab64e6de0ec34129f52809366f20f247da54c03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab88cefd6a8d29b5aef38230cab64e6de0ec34129f52809366f20f247da54c03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:27 compute-0 podman[256926]: 2025-11-29 07:52:27.610399257 +0000 UTC m=+0.175229883 container init 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:52:27 compute-0 podman[256926]: 2025-11-29 07:52:27.622500041 +0000 UTC m=+0.187330617 container start 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:52:27 compute-0 podman[256926]: 2025-11-29 07:52:27.626502138 +0000 UTC m=+0.191332774 container attach 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:52:27 compute-0 ceph-mon[75237]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:28 compute-0 cranky_davinci[256943]: {
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_id": 2,
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "type": "bluestore"
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     },
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_id": 0,
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "type": "bluestore"
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     },
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_id": 1,
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:         "type": "bluestore"
Nov 29 07:52:28 compute-0 cranky_davinci[256943]:     }
Nov 29 07:52:28 compute-0 cranky_davinci[256943]: }
Nov 29 07:52:28 compute-0 systemd[1]: libpod-79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78.scope: Deactivated successfully.
Nov 29 07:52:28 compute-0 systemd[1]: libpod-79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78.scope: Consumed 1.122s CPU time.
Nov 29 07:52:28 compute-0 podman[256976]: 2025-11-29 07:52:28.803860792 +0000 UTC m=+0.046130709 container died 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:52:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab88cefd6a8d29b5aef38230cab64e6de0ec34129f52809366f20f247da54c03-merged.mount: Deactivated successfully.
Nov 29 07:52:28 compute-0 podman[256976]: 2025-11-29 07:52:28.862847645 +0000 UTC m=+0.105117512 container remove 79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_davinci, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:52:28 compute-0 systemd[1]: libpod-conmon-79be47b585f66736d1ed135c986dff94b6bb404854407b16486df09f0c0a6b78.scope: Deactivated successfully.
Nov 29 07:52:28 compute-0 sudo[256819]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:52:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:52:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 48b581b9-c0b1-4ed5-868f-3ffa2fe406ca does not exist
Nov 29 07:52:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 15bcb2e8-7667-4cde-b6a9-7b50ce03ed6b does not exist
Nov 29 07:52:29 compute-0 sudo[256991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:29 compute-0 sudo[256991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:29 compute-0 sudo[256991]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:29 compute-0 sudo[257016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:52:29 compute-0 sudo[257016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:29 compute-0 sudo[257016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:52:29 compute-0 ceph-mon[75237]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:32 compute-0 ceph-mon[75237]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:34 compute-0 ceph-mon[75237]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:35 compute-0 podman[257041]: 2025-11-29 07:52:35.949410115 +0000 UTC m=+0.098652852 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:52:36 compute-0 ceph-mon[75237]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:38 compute-0 ceph-mon[75237]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:52:38
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'vms']
Nov 29 07:52:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:52:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:40 compute-0 ceph-mon[75237]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:41 compute-0 podman[257060]: 2025-11-29 07:52:41.920506727 +0000 UTC m=+0.094086748 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 07:52:42 compute-0 ceph-mon[75237]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:52:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 07:52:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019648137' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:52:42 compute-0 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:52:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3019648137' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:52:44 compute-0 ceph-mon[75237]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:52:44 compute-0 ceph-mon[75237]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:45 compute-0 ceph-mon[75237]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:48 compute-0 ceph-mon[75237]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:50 compute-0 ceph-mon[75237]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:52 compute-0 ceph-mon[75237]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:54 compute-0 ceph-mon[75237]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:54 compute-0 podman[257081]: 2025-11-29 07:52:54.96680516 +0000 UTC m=+0.127303850 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:52:56 compute-0 ceph-mon[75237]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:57 compute-0 ceph-mon[75237]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466162267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466162267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3466162267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3466162267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:52:59 compute-0 ceph-mon[75237]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:01 compute-0 sshd-session[257108]: Invalid user sopuser from 103.234.151.178 port 5784
Nov 29 07:53:01 compute-0 sshd-session[257108]: Received disconnect from 103.234.151.178 port 5784:11: Bye Bye [preauth]
Nov 29 07:53:01 compute-0 sshd-session[257108]: Disconnected from invalid user sopuser 103.234.151.178 port 5784 [preauth]
Nov 29 07:53:02 compute-0 ceph-mon[75237]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:05 compute-0 ceph-mon[75237]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:06 compute-0 ceph-mon[75237]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:06 compute-0 podman[257110]: 2025-11-29 07:53:06.897356439 +0000 UTC m=+0.066430536 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:53:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:07 compute-0 ceph-mon[75237]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:10 compute-0 ceph-mon[75237]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:12 compute-0 ceph-mon[75237]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:12 compute-0 podman[257128]: 2025-11-29 07:53:12.883046867 +0000 UTC m=+0.054666917 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:53:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:14 compute-0 ceph-mon[75237]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:15 compute-0 ceph-mon[75237]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.807 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.832 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.832 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.833 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.833 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.833 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.833 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.867 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.867 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.868 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.868 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:53:16 compute-0 nova_compute[255040]: 2025-11-29 07:53:16.868 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.229357) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797229857, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1062, "num_deletes": 251, "total_data_size": 1569359, "memory_usage": 1589168, "flush_reason": "Manual Compaction"}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797248706, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1544185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15707, "largest_seqno": 16768, "table_properties": {"data_size": 1538960, "index_size": 2685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11029, "raw_average_key_size": 19, "raw_value_size": 1528553, "raw_average_value_size": 2710, "num_data_blocks": 123, "num_entries": 564, "num_filter_entries": 564, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402695, "oldest_key_time": 1764402695, "file_creation_time": 1764402797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19399 microseconds, and 11911 cpu microseconds.
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.248830) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1544185 bytes OK
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.248859) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.250903) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.250954) EVENT_LOG_v1 {"time_micros": 1764402797250942, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.250979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1564386, prev total WAL file size 1564386, number of live WAL files 2.
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.251875) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1507KB)], [35(8524KB)]
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797252163, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10272767, "oldest_snapshot_seqno": -1}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179364866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.282 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4309 keys, 8502283 bytes, temperature: kUnknown
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797315660, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8502283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8470577, "index_size": 19820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106158, "raw_average_key_size": 24, "raw_value_size": 8389719, "raw_average_value_size": 1947, "num_data_blocks": 832, "num_entries": 4309, "num_filter_entries": 4309, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.315988) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8502283 bytes
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.317216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 133.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(12.2) write-amplify(5.5) OK, records in: 4823, records dropped: 514 output_compression: NoCompression
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.317232) EVENT_LOG_v1 {"time_micros": 1764402797317224, "job": 16, "event": "compaction_finished", "compaction_time_micros": 63654, "compaction_time_cpu_micros": 22419, "output_level": 6, "num_output_files": 1, "total_output_size": 8502283, "num_input_records": 4823, "num_output_records": 4309, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797317613, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402797318919, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.251726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.318995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.319008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.319011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.319013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:53:17.319015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.432 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.433 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5198MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.433 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.433 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.508 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.508 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.525 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363711429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.991 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:17 compute-0 nova_compute[255040]: 2025-11-29 07:53:17.997 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.013 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.014 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.015 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.157 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.158 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.158 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.158 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.178 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.178 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:18 compute-0 nova_compute[255040]: 2025-11-29 07:53:18.179 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:53:18 compute-0 ceph-mon[75237]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/179364866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2363711429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:19 compute-0 ceph-mon[75237]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:22 compute-0 ceph-mon[75237]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:24 compute-0 ceph-mon[75237]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:25 compute-0 podman[257192]: 2025-11-29 07:53:25.978712184 +0000 UTC m=+0.137960679 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:26 compute-0 ceph-mon[75237]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:28 compute-0 ceph-mon[75237]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:29 compute-0 sudo[257218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:29 compute-0 sudo[257218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:29 compute-0 sudo[257218]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:29 compute-0 sudo[257243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:29 compute-0 sudo[257243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:29 compute-0 sudo[257243]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:29 compute-0 sudo[257268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:29 compute-0 sudo[257268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:29 compute-0 sudo[257268]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:29 compute-0 sudo[257293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:53:29 compute-0 sudo[257293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:29 compute-0 ceph-mon[75237]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:30 compute-0 sudo[257293]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 919bc6b4-d8e3-4165-beba-a734f6202360 does not exist
Nov 29 07:53:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f35b31a2-4f21-4fe9-8cd8-25ab1b02058c does not exist
Nov 29 07:53:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 61a62c92-1edf-4ef8-bb0a-e7ceeda38f46 does not exist
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:53:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:30 compute-0 sudo[257348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:30 compute-0 sudo[257348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:30 compute-0 sudo[257348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:30 compute-0 sudo[257373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:30 compute-0 sudo[257373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:30 compute-0 sudo[257373]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:30 compute-0 sudo[257398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:30 compute-0 sudo[257398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:30 compute-0 sudo[257398]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:30 compute-0 sudo[257423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:53:30 compute-0 sudo[257423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:30 compute-0 podman[257489]: 2025-11-29 07:53:30.725325508 +0000 UTC m=+0.029005169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.004324519 +0000 UTC m=+0.308004090 container create 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:53:31 compute-0 systemd[1]: Started libpod-conmon-7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6.scope.
Nov 29 07:53:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.288756847 +0000 UTC m=+0.592436488 container init 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.300868385 +0000 UTC m=+0.604547986 container start 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.30545019 +0000 UTC m=+0.609129821 container attach 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:53:31 compute-0 musing_easley[257505]: 167 167
Nov 29 07:53:31 compute-0 systemd[1]: libpod-7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6.scope: Deactivated successfully.
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.313013865 +0000 UTC m=+0.616693526 container died 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-292171582d6118f587856e4a616c04aa75f57f6b472d429f72b9e1dc11c31b97-merged.mount: Deactivated successfully.
Nov 29 07:53:31 compute-0 podman[257489]: 2025-11-29 07:53:31.552894552 +0000 UTC m=+0.856574173 container remove 7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:53:31 compute-0 systemd[1]: libpod-conmon-7c591d13218b141c932557be210a5c70cb2dfde8e3afde202b14510c8d72fdf6.scope: Deactivated successfully.
Nov 29 07:53:31 compute-0 podman[257528]: 2025-11-29 07:53:31.821887011 +0000 UTC m=+0.094852187 container create 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:31 compute-0 ceph-mon[75237]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:31 compute-0 podman[257528]: 2025-11-29 07:53:31.762058956 +0000 UTC m=+0.035024142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:31 compute-0 systemd[1]: Started libpod-conmon-170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad.scope.
Nov 29 07:53:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:31 compute-0 podman[257528]: 2025-11-29 07:53:31.912715639 +0000 UTC m=+0.185680785 container init 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:53:31 compute-0 podman[257528]: 2025-11-29 07:53:31.924557291 +0000 UTC m=+0.197522447 container start 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:53:32 compute-0 podman[257528]: 2025-11-29 07:53:32.121221534 +0000 UTC m=+0.394186670 container attach 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:53:32 compute-0 vibrant_mclaren[257544]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:53:32 compute-0 vibrant_mclaren[257544]: --> relative data size: 1.0
Nov 29 07:53:32 compute-0 vibrant_mclaren[257544]: --> All data devices are unavailable
Nov 29 07:53:33 compute-0 systemd[1]: libpod-170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad.scope: Deactivated successfully.
Nov 29 07:53:33 compute-0 podman[257528]: 2025-11-29 07:53:33.026016837 +0000 UTC m=+1.298982033 container died 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:53:33 compute-0 systemd[1]: libpod-170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad.scope: Consumed 1.057s CPU time.
Nov 29 07:53:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-affc4b911530cf783713e44eee6d211d7f242850c33bebd2f2a0170be042f34b-merged.mount: Deactivated successfully.
Nov 29 07:53:33 compute-0 podman[257528]: 2025-11-29 07:53:33.376658274 +0000 UTC m=+1.649623460 container remove 170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:33 compute-0 systemd[1]: libpod-conmon-170189a1d45259885bffa8acd065b8dcd54396e3753d4355398569cdbd40d2ad.scope: Deactivated successfully.
Nov 29 07:53:33 compute-0 sudo[257423]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:33 compute-0 sudo[257587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:33 compute-0 sudo[257587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:33 compute-0 sudo[257587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:33 compute-0 sudo[257612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:33 compute-0 sudo[257612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:33 compute-0 sudo[257612]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:33 compute-0 sudo[257637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:33 compute-0 sudo[257637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:33 compute-0 sudo[257637]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:33 compute-0 sudo[257662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:53:33 compute-0 sudo[257662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.024562438 +0000 UTC m=+0.038683633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.242242811 +0000 UTC m=+0.256363926 container create 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:53:34 compute-0 ceph-mon[75237]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:34 compute-0 systemd[1]: Started libpod-conmon-7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586.scope.
Nov 29 07:53:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.43501779 +0000 UTC m=+0.449139015 container init 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.445255038 +0000 UTC m=+0.459376183 container start 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:53:34 compute-0 hopeful_colden[257743]: 167 167
Nov 29 07:53:34 compute-0 systemd[1]: libpod-7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586.scope: Deactivated successfully.
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.621852726 +0000 UTC m=+0.635973861 container attach 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.622392071 +0000 UTC m=+0.636513176 container died 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:53:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b557ffe3d14f6d7a3b96ca35a3bdc10de0b1b5d6ee75504fc3512c27baeef37-merged.mount: Deactivated successfully.
Nov 29 07:53:34 compute-0 podman[257727]: 2025-11-29 07:53:34.697182433 +0000 UTC m=+0.711303538 container remove 7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_colden, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:53:34 compute-0 systemd[1]: libpod-conmon-7086e0587c65aeb23020b6a2d388486920b5a0dfce619d759b3093844e48e586.scope: Deactivated successfully.
Nov 29 07:53:35 compute-0 podman[257767]: 2025-11-29 07:53:34.933342819 +0000 UTC m=+0.045693903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:35 compute-0 podman[257767]: 2025-11-29 07:53:35.13618508 +0000 UTC m=+0.248536154 container create 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:53:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:35 compute-0 systemd[1]: Started libpod-conmon-4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4.scope.
Nov 29 07:53:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3955014c177deac41005d6e1592d0d3496f7f549133ab792c714d7ce4770076b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3955014c177deac41005d6e1592d0d3496f7f549133ab792c714d7ce4770076b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3955014c177deac41005d6e1592d0d3496f7f549133ab792c714d7ce4770076b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3955014c177deac41005d6e1592d0d3496f7f549133ab792c714d7ce4770076b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:35 compute-0 podman[257767]: 2025-11-29 07:53:35.710423732 +0000 UTC m=+0.822774846 container init 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:35 compute-0 podman[257767]: 2025-11-29 07:53:35.724574496 +0000 UTC m=+0.836925570 container start 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:53:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:36.163 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:36.165 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:53:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:53:36.166 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:36 compute-0 podman[257767]: 2025-11-29 07:53:36.314743091 +0000 UTC m=+1.427094235 container attach 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:36 compute-0 festive_kilby[257783]: {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     "0": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "devices": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "/dev/loop3"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             ],
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_name": "ceph_lv0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_size": "21470642176",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "name": "ceph_lv0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "tags": {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.crush_device_class": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.encrypted": "0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_id": "0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.vdo": "0"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             },
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "vg_name": "ceph_vg0"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         }
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     ],
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     "1": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "devices": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "/dev/loop4"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             ],
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_name": "ceph_lv1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_size": "21470642176",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "name": "ceph_lv1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "tags": {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.crush_device_class": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.encrypted": "0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_id": "1",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.vdo": "0"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             },
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "vg_name": "ceph_vg1"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         }
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     ],
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     "2": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "devices": [
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "/dev/loop5"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             ],
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_name": "ceph_lv2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_size": "21470642176",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "name": "ceph_lv2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "tags": {
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.crush_device_class": "",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.encrypted": "0",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osd_id": "2",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:                 "ceph.vdo": "0"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             },
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "type": "block",
Nov 29 07:53:36 compute-0 festive_kilby[257783]:             "vg_name": "ceph_vg2"
Nov 29 07:53:36 compute-0 festive_kilby[257783]:         }
Nov 29 07:53:36 compute-0 festive_kilby[257783]:     ]
Nov 29 07:53:36 compute-0 festive_kilby[257783]: }
Nov 29 07:53:36 compute-0 podman[257767]: 2025-11-29 07:53:36.520999175 +0000 UTC m=+1.633350249 container died 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:53:36 compute-0 systemd[1]: libpod-4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4.scope: Deactivated successfully.
Nov 29 07:53:37 compute-0 ceph-mon[75237]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3955014c177deac41005d6e1592d0d3496f7f549133ab792c714d7ce4770076b-merged.mount: Deactivated successfully.
Nov 29 07:53:38 compute-0 ceph-mon[75237]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:38 compute-0 podman[257767]: 2025-11-29 07:53:38.440275671 +0000 UTC m=+3.552626705 container remove 4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:38 compute-0 sudo[257662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:38 compute-0 systemd[1]: libpod-conmon-4ac7501fe784a15c472d9334ccd847c213cc4c87da1a3bb29717fa75888bcbc4.scope: Deactivated successfully.
Nov 29 07:53:38 compute-0 sudo[257815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:38 compute-0 sudo[257815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:38 compute-0 sudo[257815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:38 compute-0 podman[257804]: 2025-11-29 07:53:38.561914176 +0000 UTC m=+0.809724401 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 07:53:38 compute-0 sudo[257846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:38 compute-0 sudo[257846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:38 compute-0 sudo[257846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:38 compute-0 sudo[257872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:38 compute-0 sudo[257872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:38 compute-0 sudo[257872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:38 compute-0 sudo[257897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:53:38 compute-0 sudo[257897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:53:38
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 07:53:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.118209601 +0000 UTC m=+0.068923395 container create 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:53:39 compute-0 systemd[1]: Started libpod-conmon-39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046.scope.
Nov 29 07:53:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.085400179 +0000 UTC m=+0.036114023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.401897917 +0000 UTC m=+0.352611751 container init 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.418846597 +0000 UTC m=+0.369560421 container start 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.424178823 +0000 UTC m=+0.374892627 container attach 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:53:39 compute-0 serene_hawking[257978]: 167 167
Nov 29 07:53:39 compute-0 systemd[1]: libpod-39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046.scope: Deactivated successfully.
Nov 29 07:53:39 compute-0 podman[257962]: 2025-11-29 07:53:39.429341523 +0000 UTC m=+0.380055337 container died 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:53:39 compute-0 ceph-mon[75237]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa470ff50bf2369f7f3ce75b9c2bf39a60d99d176c7762141bd5df69d8adff1-merged.mount: Deactivated successfully.
Nov 29 07:53:40 compute-0 podman[257962]: 2025-11-29 07:53:40.049481273 +0000 UTC m=+1.000195107 container remove 39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:53:40 compute-0 systemd[1]: libpod-conmon-39afdc6450c68346e67cc24651ed9ebef86c782ca3cd9083ab77205ec8fce046.scope: Deactivated successfully.
Nov 29 07:53:40 compute-0 podman[258002]: 2025-11-29 07:53:40.259814327 +0000 UTC m=+0.054756018 container create 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:53:40 compute-0 systemd[1]: Started libpod-conmon-4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d.scope.
Nov 29 07:53:40 compute-0 podman[258002]: 2025-11-29 07:53:40.227725745 +0000 UTC m=+0.022667466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa287663d63165e7d76fbb9e1bf56a7e085270dd5b4e9a3b4fb4866e04379b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa287663d63165e7d76fbb9e1bf56a7e085270dd5b4e9a3b4fb4866e04379b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa287663d63165e7d76fbb9e1bf56a7e085270dd5b4e9a3b4fb4866e04379b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa287663d63165e7d76fbb9e1bf56a7e085270dd5b4e9a3b4fb4866e04379b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:40 compute-0 podman[258002]: 2025-11-29 07:53:40.363757481 +0000 UTC m=+0.158699182 container init 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:53:40 compute-0 podman[258002]: 2025-11-29 07:53:40.376249931 +0000 UTC m=+0.171191632 container start 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:53:40 compute-0 podman[258002]: 2025-11-29 07:53:40.385562094 +0000 UTC m=+0.180503785 container attach 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:41 compute-0 youthful_jang[258018]: {
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_id": 2,
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "type": "bluestore"
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     },
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_id": 0,
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "type": "bluestore"
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     },
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_id": 1,
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:53:41 compute-0 youthful_jang[258018]:         "type": "bluestore"
Nov 29 07:53:41 compute-0 youthful_jang[258018]:     }
Nov 29 07:53:41 compute-0 youthful_jang[258018]: }
Nov 29 07:53:41 compute-0 systemd[1]: libpod-4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d.scope: Deactivated successfully.
Nov 29 07:53:41 compute-0 podman[258002]: 2025-11-29 07:53:41.491560704 +0000 UTC m=+1.286502395 container died 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:53:41 compute-0 systemd[1]: libpod-4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d.scope: Consumed 1.129s CPU time.
Nov 29 07:53:41 compute-0 ceph-mon[75237]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fa287663d63165e7d76fbb9e1bf56a7e085270dd5b4e9a3b4fb4866e04379b6-merged.mount: Deactivated successfully.
Nov 29 07:53:42 compute-0 podman[258002]: 2025-11-29 07:53:42.072329252 +0000 UTC m=+1.867270943 container remove 4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:53:42 compute-0 systemd[1]: libpod-conmon-4372cc53e16d4dd3cd3c7c23f6409f45b44a3f04ede61f51dc33d28a02ef031d.scope: Deactivated successfully.
Nov 29 07:53:42 compute-0 sudo[257897]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:53:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:53:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 71812ec2-e422-4da3-9801-97f13db108ab does not exist
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 0c3155e4-9fba-4b23-bca1-e23493f84c84 does not exist
Nov 29 07:53:42 compute-0 sudo[258063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:42 compute-0 sudo[258063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:42 compute-0 sudo[258063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:42 compute-0 sudo[258088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:53:42 compute-0 sudo[258088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:42 compute-0 sudo[258088]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:53:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:53:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:53:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:43 compute-0 podman[258113]: 2025-11-29 07:53:43.960330249 +0000 UTC m=+0.111737277 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 07:53:44 compute-0 ceph-mon[75237]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:46 compute-0 ceph-mon[75237]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:48 compute-0 ceph-mon[75237]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:50 compute-0 ceph-mon[75237]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:52 compute-0 ceph-mon[75237]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:54 compute-0 ceph-mon[75237]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:53:56 compute-0 ceph-mon[75237]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:56 compute-0 podman[258135]: 2025-11-29 07:53:56.951387862 +0000 UTC m=+0.116232619 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:58 compute-0 ceph-mon[75237]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:53:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3480098445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:53:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3480098445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:53:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3480098445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3480098445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75237]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:02 compute-0 ceph-mon[75237]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:04 compute-0 ceph-mon[75237]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:06 compute-0 ceph-mon[75237]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:08 compute-0 ceph-mon[75237]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:08 compute-0 podman[258161]: 2025-11-29 07:54:08.895320755 +0000 UTC m=+0.064203666 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:54:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:09 compute-0 ceph-mon[75237]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:12 compute-0 ceph-mon[75237]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:14 compute-0 ceph-mon[75237]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:14 compute-0 podman[258180]: 2025-11-29 07:54:14.908997004 +0000 UTC m=+0.064948446 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:54:14 compute-0 nova_compute[255040]: 2025-11-29 07:54:14.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:14 compute-0 nova_compute[255040]: 2025-11-29 07:54:14.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:14 compute-0 nova_compute[255040]: 2025-11-29 07:54:14.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:16 compute-0 ceph-mon[75237]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:16 compute-0 sshd-session[258201]: Received disconnect from 103.234.151.178 port 29606:11: Bye Bye [preauth]
Nov 29 07:54:16 compute-0 sshd-session[258201]: Disconnected from authenticating user root 103.234.151.178 port 29606 [preauth]
Nov 29 07:54:16 compute-0 nova_compute[255040]: 2025-11-29 07:54:16.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:16 compute-0 nova_compute[255040]: 2025-11-29 07:54:16.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:16 compute-0 nova_compute[255040]: 2025-11-29 07:54:16.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.009 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.010 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.010 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.011 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.012 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84328196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.488 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.688 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.690 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.690 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.691 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.772 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.773 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:54:17 compute-0 nova_compute[255040]: 2025-11-29 07:54:17.796 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131836458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:18 compute-0 nova_compute[255040]: 2025-11-29 07:54:18.228 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:18 compute-0 nova_compute[255040]: 2025-11-29 07:54:18.237 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:54:18 compute-0 ceph-mon[75237]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/84328196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3131836458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:19 compute-0 nova_compute[255040]: 2025-11-29 07:54:19.250 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:54:19 compute-0 nova_compute[255040]: 2025-11-29 07:54:19.251 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:54:19 compute-0 nova_compute[255040]: 2025-11-29 07:54:19.252 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:20 compute-0 nova_compute[255040]: 2025-11-29 07:54:20.253 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:20 compute-0 nova_compute[255040]: 2025-11-29 07:54:20.253 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:54:20 compute-0 nova_compute[255040]: 2025-11-29 07:54:20.253 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:54:20 compute-0 ceph-mon[75237]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:21 compute-0 nova_compute[255040]: 2025-11-29 07:54:21.516 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:54:21 compute-0 nova_compute[255040]: 2025-11-29 07:54:21.516 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:21 compute-0 nova_compute[255040]: 2025-11-29 07:54:21.517 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:21 compute-0 nova_compute[255040]: 2025-11-29 07:54:21.517 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:54:22 compute-0 ceph-mon[75237]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:24 compute-0 ceph-mon[75237]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:25 compute-0 ceph-mon[75237]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:54:27.113 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:54:27.114 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:54:27.114 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:27 compute-0 podman[258247]: 2025-11-29 07:54:27.91826628 +0000 UTC m=+0.086897152 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 29 07:54:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:29 compute-0 ceph-mon[75237]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:30 compute-0 ceph-mon[75237]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:32 compute-0 ceph-mon[75237]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:33 compute-0 ceph-mon[75237]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:36 compute-0 ceph-mon[75237]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:38 compute-0 ceph-mon[75237]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:54:38
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes']
Nov 29 07:54:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:54:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:39 compute-0 podman[258273]: 2025-11-29 07:54:39.898175152 +0000 UTC m=+0.063531507 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:54:40 compute-0 ceph-mon[75237]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:42 compute-0 ceph-mon[75237]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:42 compute-0 sudo[258292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:42 compute-0 sudo[258292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:42 compute-0 sudo[258292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:42 compute-0 sudo[258317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:42 compute-0 sudo[258317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:42 compute-0 sudo[258317]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:42 compute-0 sudo[258342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:42 compute-0 sudo[258342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:42 compute-0 sudo[258342]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:42 compute-0 sudo[258367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:54:42 compute-0 sudo[258367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:54:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:54:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:43 compute-0 sudo[258367]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a2d14d05-8fc9-4f43-b653-7128896e2016 does not exist
Nov 29 07:54:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b1b2cfd8-1faa-4de5-b7c2-9b20fe78c75b does not exist
Nov 29 07:54:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 59f81704-3702-42cf-b59e-83af05828087 does not exist
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:54:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:43 compute-0 sudo[258424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:43 compute-0 sudo[258424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:43 compute-0 sudo[258424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:54:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:43 compute-0 sudo[258449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:43 compute-0 sudo[258449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:43 compute-0 sudo[258449]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:43 compute-0 sudo[258474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:43 compute-0 sudo[258474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:43 compute-0 sudo[258474]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:43 compute-0 sudo[258499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:54:43 compute-0 sudo[258499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:43 compute-0 podman[258565]: 2025-11-29 07:54:43.949518317 +0000 UTC m=+0.059543328 container create 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:54:44 compute-0 systemd[1]: Started libpod-conmon-3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72.scope.
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:43.924080655 +0000 UTC m=+0.034105706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:44.067525142 +0000 UTC m=+0.177550173 container init 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:44.082174875 +0000 UTC m=+0.192199896 container start 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:44.087000424 +0000 UTC m=+0.197025475 container attach 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:54:44 compute-0 hardcore_mestorf[258582]: 167 167
Nov 29 07:54:44 compute-0 systemd[1]: libpod-3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72.scope: Deactivated successfully.
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:44.090278942 +0000 UTC m=+0.200303943 container died 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2537a4987fac1d6fbfa0bf2058d23de051b8c4229346f187795e9df465a1cdd-merged.mount: Deactivated successfully.
Nov 29 07:54:44 compute-0 podman[258565]: 2025-11-29 07:54:44.139601065 +0000 UTC m=+0.249626076 container remove 3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:54:44 compute-0 systemd[1]: libpod-conmon-3d3be6c983b349acb225bf8ee7da6e2ce57e682431e79b21898f4cc7e6d5ba72.scope: Deactivated successfully.
Nov 29 07:54:44 compute-0 podman[258607]: 2025-11-29 07:54:44.322359106 +0000 UTC m=+0.052299244 container create 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 07:54:44 compute-0 systemd[1]: Started libpod-conmon-12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4.scope.
Nov 29 07:54:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:44 compute-0 ceph-mon[75237]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:44 compute-0 podman[258607]: 2025-11-29 07:54:44.297446707 +0000 UTC m=+0.027386935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:44 compute-0 podman[258607]: 2025-11-29 07:54:44.404624322 +0000 UTC m=+0.134564490 container init 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:54:44 compute-0 podman[258607]: 2025-11-29 07:54:44.412599976 +0000 UTC m=+0.142540114 container start 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:54:44 compute-0 podman[258607]: 2025-11-29 07:54:44.417482696 +0000 UTC m=+0.147422834 container attach 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:54:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:45 compute-0 hardcore_kapitsa[258624]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:54:45 compute-0 hardcore_kapitsa[258624]: --> relative data size: 1.0
Nov 29 07:54:45 compute-0 hardcore_kapitsa[258624]: --> All data devices are unavailable
Nov 29 07:54:45 compute-0 systemd[1]: libpod-12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4.scope: Deactivated successfully.
Nov 29 07:54:45 compute-0 podman[258607]: 2025-11-29 07:54:45.577850724 +0000 UTC m=+1.307790892 container died 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:45 compute-0 systemd[1]: libpod-12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4.scope: Consumed 1.105s CPU time.
Nov 29 07:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4644bce00d1b26b70760b2b18f10b7880b13ea31c47e9f65d231888c8039ebd-merged.mount: Deactivated successfully.
Nov 29 07:54:45 compute-0 podman[258607]: 2025-11-29 07:54:45.644275905 +0000 UTC m=+1.374216043 container remove 12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:54:45 compute-0 systemd[1]: libpod-conmon-12a7291d3ec275c0806f751f8f28fc7feac01e1e6104d7422004b01983a71bd4.scope: Deactivated successfully.
Nov 29 07:54:45 compute-0 sudo[258499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:45 compute-0 podman[258654]: 2025-11-29 07:54:45.695191811 +0000 UTC m=+0.076290817 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 07:54:45 compute-0 sudo[258683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:45 compute-0 sudo[258683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:45 compute-0 sudo[258683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:45 compute-0 sudo[258708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:45 compute-0 sudo[258708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:45 compute-0 sudo[258708]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:45 compute-0 sudo[258733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:45 compute-0 sudo[258733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:45 compute-0 sudo[258733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:45 compute-0 sudo[258758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:54:45 compute-0 sudo[258758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.312914127 +0000 UTC m=+0.056282641 container create d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:54:46 compute-0 systemd[1]: Started libpod-conmon-d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7.scope.
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.285154502 +0000 UTC m=+0.028523096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:46 compute-0 ceph-mon[75237]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.411035928 +0000 UTC m=+0.154404482 container init d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.419692 +0000 UTC m=+0.163060514 container start d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.423917983 +0000 UTC m=+0.167286517 container attach d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:54:46 compute-0 musing_euler[258840]: 167 167
Nov 29 07:54:46 compute-0 systemd[1]: libpod-d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7.scope: Deactivated successfully.
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.429307908 +0000 UTC m=+0.172676432 container died d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 07:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb7130cdc2563df6005a78526012ae8753056fa3460f4220b7283d0dd4db3a4-merged.mount: Deactivated successfully.
Nov 29 07:54:46 compute-0 podman[258824]: 2025-11-29 07:54:46.478955579 +0000 UTC m=+0.222324103 container remove d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:54:46 compute-0 systemd[1]: libpod-conmon-d24560023e92cc0ef55e449fb78697484bc312f91819badb56585df4954d15d7.scope: Deactivated successfully.
Nov 29 07:54:46 compute-0 podman[258862]: 2025-11-29 07:54:46.699481333 +0000 UTC m=+0.074772556 container create fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:46 compute-0 systemd[1]: Started libpod-conmon-fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab.scope.
Nov 29 07:54:46 compute-0 podman[258862]: 2025-11-29 07:54:46.670460895 +0000 UTC m=+0.045752168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e27c1bf8eb49c8a1e07a8a0790e1e90a2d515f9c8f188fae85ee409cec94b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e27c1bf8eb49c8a1e07a8a0790e1e90a2d515f9c8f188fae85ee409cec94b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e27c1bf8eb49c8a1e07a8a0790e1e90a2d515f9c8f188fae85ee409cec94b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e27c1bf8eb49c8a1e07a8a0790e1e90a2d515f9c8f188fae85ee409cec94b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:46 compute-0 podman[258862]: 2025-11-29 07:54:46.801754916 +0000 UTC m=+0.177046119 container init fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:54:46 compute-0 podman[258862]: 2025-11-29 07:54:46.817058256 +0000 UTC m=+0.192349469 container start fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:54:46 compute-0 podman[258862]: 2025-11-29 07:54:46.821491755 +0000 UTC m=+0.196782928 container attach fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:54:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:47 compute-0 hopeful_newton[258878]: {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     "0": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "devices": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "/dev/loop3"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             ],
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_name": "ceph_lv0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_size": "21470642176",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "name": "ceph_lv0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "tags": {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.crush_device_class": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.encrypted": "0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_id": "0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.vdo": "0"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             },
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "vg_name": "ceph_vg0"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         }
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     ],
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     "1": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "devices": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "/dev/loop4"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             ],
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_name": "ceph_lv1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_size": "21470642176",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "name": "ceph_lv1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "tags": {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.crush_device_class": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.encrypted": "0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_id": "1",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.vdo": "0"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             },
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "vg_name": "ceph_vg1"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         }
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     ],
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     "2": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "devices": [
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "/dev/loop5"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             ],
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_name": "ceph_lv2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_size": "21470642176",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "name": "ceph_lv2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "tags": {
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.crush_device_class": "",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.encrypted": "0",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osd_id": "2",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:                 "ceph.vdo": "0"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             },
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "type": "block",
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:             "vg_name": "ceph_vg2"
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:         }
Nov 29 07:54:47 compute-0 hopeful_newton[258878]:     ]
Nov 29 07:54:47 compute-0 hopeful_newton[258878]: }
Nov 29 07:54:47 compute-0 systemd[1]: libpod-fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab.scope: Deactivated successfully.
Nov 29 07:54:47 compute-0 conmon[258878]: conmon fa391ae2438ed354df79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab.scope/container/memory.events
Nov 29 07:54:47 compute-0 podman[258862]: 2025-11-29 07:54:47.577801857 +0000 UTC m=+0.953093080 container died fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4e27c1bf8eb49c8a1e07a8a0790e1e90a2d515f9c8f188fae85ee409cec94b0-merged.mount: Deactivated successfully.
Nov 29 07:54:47 compute-0 podman[258862]: 2025-11-29 07:54:47.651148964 +0000 UTC m=+1.026440167 container remove fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:54:47 compute-0 systemd[1]: libpod-conmon-fa391ae2438ed354df793de17aad6e97803388804f60e039679a815b556a1bab.scope: Deactivated successfully.
Nov 29 07:54:47 compute-0 sudo[258758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:47 compute-0 sudo[258898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:47 compute-0 sudo[258898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:47 compute-0 sudo[258898]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:47 compute-0 sudo[258923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:47 compute-0 sudo[258923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:47 compute-0 sudo[258923]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:47 compute-0 sudo[258948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:47 compute-0 sudo[258948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:47 compute-0 sudo[258948]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:48 compute-0 sudo[258973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:54:48 compute-0 sudo[258973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.381307924 +0000 UTC m=+0.036475329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.478961774 +0000 UTC m=+0.134129119 container create 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:54:48 compute-0 ceph-mon[75237]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:48 compute-0 systemd[1]: Started libpod-conmon-8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9.scope.
Nov 29 07:54:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.620065348 +0000 UTC m=+0.275232753 container init 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.628754891 +0000 UTC m=+0.283922236 container start 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:54:48 compute-0 zealous_stonebraker[259055]: 167 167
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.637737081 +0000 UTC m=+0.292904506 container attach 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:54:48 compute-0 systemd[1]: libpod-8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9.scope: Deactivated successfully.
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.638546893 +0000 UTC m=+0.293714238 container died 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcbb9991235ececa24f22f43c734d60c1b874e02eb345d7b44a9bc25e78d99d9-merged.mount: Deactivated successfully.
Nov 29 07:54:48 compute-0 podman[259039]: 2025-11-29 07:54:48.72643034 +0000 UTC m=+0.381597685 container remove 8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:54:48 compute-0 systemd[1]: libpod-conmon-8741418e78c3070a43b7634f29c935eb22f520b64359d47a994eed2dfd5129a9.scope: Deactivated successfully.
Nov 29 07:54:49 compute-0 podman[259079]: 2025-11-29 07:54:48.921232924 +0000 UTC m=+0.031865806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:49 compute-0 podman[259079]: 2025-11-29 07:54:49.057636672 +0000 UTC m=+0.168269514 container create aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:54:49 compute-0 systemd[1]: Started libpod-conmon-aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46.scope.
Nov 29 07:54:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f966186cbf96f6d2fe165b31a1cf14685b5c5db0fa2e6c01fa654a438a151114/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f966186cbf96f6d2fe165b31a1cf14685b5c5db0fa2e6c01fa654a438a151114/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f966186cbf96f6d2fe165b31a1cf14685b5c5db0fa2e6c01fa654a438a151114/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f966186cbf96f6d2fe165b31a1cf14685b5c5db0fa2e6c01fa654a438a151114/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:49 compute-0 podman[259079]: 2025-11-29 07:54:49.568322837 +0000 UTC m=+0.678955689 container init aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:49 compute-0 podman[259079]: 2025-11-29 07:54:49.579679141 +0000 UTC m=+0.690311973 container start aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:54:49 compute-0 podman[259079]: 2025-11-29 07:54:49.606783478 +0000 UTC m=+0.717416310 container attach aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:54:49 compute-0 ceph-mon[75237]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:50 compute-0 recursing_beaver[259096]: {
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_id": 2,
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "type": "bluestore"
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     },
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_id": 0,
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "type": "bluestore"
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     },
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_id": 1,
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:         "type": "bluestore"
Nov 29 07:54:50 compute-0 recursing_beaver[259096]:     }
Nov 29 07:54:50 compute-0 recursing_beaver[259096]: }
Nov 29 07:54:50 compute-0 systemd[1]: libpod-aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46.scope: Deactivated successfully.
Nov 29 07:54:50 compute-0 systemd[1]: libpod-aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46.scope: Consumed 1.007s CPU time.
Nov 29 07:54:50 compute-0 podman[259079]: 2025-11-29 07:54:50.581330282 +0000 UTC m=+1.691963124 container died aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:54:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f966186cbf96f6d2fe165b31a1cf14685b5c5db0fa2e6c01fa654a438a151114-merged.mount: Deactivated successfully.
Nov 29 07:54:51 compute-0 podman[259079]: 2025-11-29 07:54:51.112698172 +0000 UTC m=+2.223331014 container remove aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:54:51 compute-0 sudo[258973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:54:51 compute-0 systemd[1]: libpod-conmon-aa35ae8336f68c68d1a6faffb400f1e14c2ae90662bdf3e33fc42198c502ce46.scope: Deactivated successfully.
Nov 29 07:54:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:54:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 144441a2-194f-4b21-b7af-93456845a605 does not exist
Nov 29 07:54:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 91c1049d-6177-47f9-bd85-85ee82e3a55f does not exist
Nov 29 07:54:51 compute-0 sudo[259141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:51 compute-0 sudo[259141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:51 compute-0 sudo[259141]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:51 compute-0 sudo[259166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:54:51 compute-0 sudo[259166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:51 compute-0 sudo[259166]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:52 compute-0 ceph-mon[75237]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:54:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:54 compute-0 ceph-mon[75237]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:55 compute-0 ceph-mon[75237]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:54:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:57 compute-0 ceph-mon[75237]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1008030770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1008030770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1008030770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1008030770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:58 compute-0 podman[259191]: 2025-11-29 07:54:58.977894283 +0000 UTC m=+0.133744657 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 07:54:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:54:59 compute-0 ceph-mon[75237]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:02 compute-0 ceph-mon[75237]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:04 compute-0 ceph-mon[75237]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:07 compute-0 ceph-mon[75237]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:09 compute-0 ceph-mon[75237]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:10 compute-0 ceph-mon[75237]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:10 compute-0 podman[259216]: 2025-11-29 07:55:10.904276862 +0000 UTC m=+0.069539245 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:55:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:12 compute-0 ceph-mon[75237]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:14 compute-0 ceph-mon[75237]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:14 compute-0 nova_compute[255040]: 2025-11-29 07:55:14.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:15 compute-0 podman[259237]: 2025-11-29 07:55:15.899197131 +0000 UTC m=+0.062936709 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 07:55:15 compute-0 nova_compute[255040]: 2025-11-29 07:55:15.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:16 compute-0 nova_compute[255040]: 2025-11-29 07:55:16.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:16 compute-0 nova_compute[255040]: 2025-11-29 07:55:16.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:16 compute-0 nova_compute[255040]: 2025-11-29 07:55:16.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:16 compute-0 nova_compute[255040]: 2025-11-29 07:55:16.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.992 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.993 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:18 compute-0 nova_compute[255040]: 2025-11-29 07:55:18.993 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.021 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.021 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.022 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.022 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124271337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.761 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.943 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.945 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.945 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:19 compute-0 nova_compute[255040]: 2025-11-29 07:55:19.946 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:20 compute-0 ceph-mon[75237]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.027 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.028 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.060 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010777082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.586 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.593 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.615 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.617 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:55:20 compute-0 nova_compute[255040]: 2025-11-29 07:55:20.617 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:21 compute-0 ceph-mon[75237]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:21 compute-0 ceph-mon[75237]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4124271337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3010777082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:22 compute-0 nova_compute[255040]: 2025-11-29 07:55:22.600 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:22 compute-0 nova_compute[255040]: 2025-11-29 07:55:22.601 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:55:22 compute-0 ceph-mon[75237]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:24 compute-0 ceph-mon[75237]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:25 compute-0 ceph-mon[75237]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:55:27.114 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:55:27.115 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:55:27.116 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:29 compute-0 ceph-mon[75237]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:29 compute-0 podman[259302]: 2025-11-29 07:55:29.959703011 +0000 UTC m=+0.118811907 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:55:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:31 compute-0 ceph-mon[75237]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:32 compute-0 ceph-mon[75237]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:34 compute-0 ceph-mon[75237]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:35 compute-0 ceph-mon[75237]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:38 compute-0 ceph-mon[75237]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:55:38
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'images', 'vms', '.rgw.root', 'backups', '.mgr', 'default.rgw.control']
Nov 29 07:55:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:55:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:39 compute-0 ceph-mon[75237]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:41 compute-0 podman[259329]: 2025-11-29 07:55:41.905312307 +0000 UTC m=+0.059393434 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:55:42 compute-0 ceph-mon[75237]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:55:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:55:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:44 compute-0 ceph-mon[75237]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:46 compute-0 ceph-mon[75237]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:46 compute-0 podman[259348]: 2025-11-29 07:55:46.913792009 +0000 UTC m=+0.065141319 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:48 compute-0 ceph-mon[75237]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:50 compute-0 ceph-mon[75237]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:51 compute-0 sudo[259368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:51 compute-0 sudo[259368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:51 compute-0 sudo[259368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:51 compute-0 sudo[259393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:51 compute-0 sudo[259393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:51 compute-0 sudo[259393]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:51 compute-0 sudo[259418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:51 compute-0 sudo[259418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:51 compute-0 sudo[259418]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:52 compute-0 sudo[259443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:55:52 compute-0 sudo[259443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:52 compute-0 ceph-mon[75237]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:52 compute-0 sudo[259443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:55:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9ecf3a8a-4775-47e7-8c30-123fe168984e does not exist
Nov 29 07:55:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 72f6fa25-c674-4416-80fa-91972b661068 does not exist
Nov 29 07:55:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev fd353d14-57d5-4074-b83a-42203b7a80a4 does not exist
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:55:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:55:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:52 compute-0 sudo[259500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:52 compute-0 sudo[259500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:52 compute-0 sudo[259500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:52 compute-0 sudo[259525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:52 compute-0 sudo[259525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:52 compute-0 sudo[259525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:52 compute-0 sudo[259550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:52 compute-0 sudo[259550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:52 compute-0 sudo[259550]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:52 compute-0 sudo[259575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:55:52 compute-0 sudo[259575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.361408735 +0000 UTC m=+0.049540420 container create bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:55:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:53 compute-0 systemd[1]: Started libpod-conmon-bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f.scope.
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.339883387 +0000 UTC m=+0.028015102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.452867927 +0000 UTC m=+0.140999642 container init bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.461673133 +0000 UTC m=+0.149804828 container start bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.465252729 +0000 UTC m=+0.153384444 container attach bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:53 compute-0 inspiring_euclid[259656]: 167 167
Nov 29 07:55:53 compute-0 systemd[1]: libpod-bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f.scope: Deactivated successfully.
Nov 29 07:55:53 compute-0 conmon[259656]: conmon bfa2bd077bd8ebc0107f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f.scope/container/memory.events
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.469930904 +0000 UTC m=+0.158062599 container died bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1ce578824e77a86ddd50a289a2fa7ccecc9c42567caa0afbdcadc6bdac6dfe-merged.mount: Deactivated successfully.
Nov 29 07:55:53 compute-0 podman[259640]: 2025-11-29 07:55:53.516494743 +0000 UTC m=+0.204626468 container remove bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:53 compute-0 systemd[1]: libpod-conmon-bfa2bd077bd8ebc0107fcc71dbbae3b2531a7a9e8407cfd771e9c7d85c065c0f.scope: Deactivated successfully.
Nov 29 07:55:53 compute-0 podman[259680]: 2025-11-29 07:55:53.759711406 +0000 UTC m=+0.084457756 container create e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:55:53 compute-0 systemd[1]: Started libpod-conmon-e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f.scope.
Nov 29 07:55:53 compute-0 podman[259680]: 2025-11-29 07:55:53.727293006 +0000 UTC m=+0.052039436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:53 compute-0 podman[259680]: 2025-11-29 07:55:53.851764644 +0000 UTC m=+0.176511024 container init e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:55:53 compute-0 podman[259680]: 2025-11-29 07:55:53.860219881 +0000 UTC m=+0.184966231 container start e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:55:53 compute-0 podman[259680]: 2025-11-29 07:55:53.864828664 +0000 UTC m=+0.189575034 container attach e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 07:55:54 compute-0 ceph-mon[75237]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:54 compute-0 sshd[189732]: Timeout before authentication for connection from 45.78.219.195 to 38.102.83.203, pid = 258133
Nov 29 07:55:55 compute-0 stoic_solomon[259697]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:55:55 compute-0 stoic_solomon[259697]: --> relative data size: 1.0
Nov 29 07:55:55 compute-0 stoic_solomon[259697]: --> All data devices are unavailable
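[annotation] The three "-->" lines above are the captured stdout of the short-lived stoic_solomon container: cephadm has asked ceph-volume for a batch dry run, and every candidate is rejected because all three LVM data devices already carry BlueStore OSDs (their ceph.osd_id LV tags appear in the lvm list output further below). A minimal sketch of the underlying idea, checking LV tags via lvm2's JSON reporting; the helper names and the decision rule are illustrative, not ceph-volume's actual code:

    import json
    import subprocess

    def lvs_report():
        # lvm2's JSON reporting: lvs --reportformat json -o lv_name,vg_name,lv_tags
        out = subprocess.check_output(
            ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"])
        return json.loads(out)["report"][0]["lv"]

    def already_an_osd(lv):
        # ceph-volume records its metadata as LV tags, e.g. ceph.osd_id=0
        return any(t.startswith("ceph.osd_id=") for t in lv["lv_tags"].split(","))

    for lv in lvs_report():
        state = "unavailable (OSD present)" if already_an_osd(lv) else "free"
        print(f"{lv['vg_name']}/{lv['lv_name']}: {state}")

Run against this host it would flag ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1 and ceph_vg2/ceph_lv2 as unavailable, which is consistent with "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable".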
Nov 29 07:55:55 compute-0 systemd[1]: libpod-e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f.scope: Deactivated successfully.
Nov 29 07:55:55 compute-0 systemd[1]: libpod-e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f.scope: Consumed 1.194s CPU time.
Nov 29 07:55:55 compute-0 podman[259726]: 2025-11-29 07:55:55.150288817 +0000 UTC m=+0.034740222 container died e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-46bfc51a4b6246469286a94d55d01654d4bc00050db2a9da96f587a40da5fa0a-merged.mount: Deactivated successfully.
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:55 compute-0 podman[259726]: 2025-11-29 07:55:55.259897807 +0000 UTC m=+0.144349042 container remove e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:55 compute-0 systemd[1]: libpod-conmon-e4fec2f2c10440975b8f56901368018e4a08d290378f3f0533707b8a5a93ab5f.scope: Deactivated successfully.
Nov 29 07:55:55 compute-0 sudo[259575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:55 compute-0 sudo[259740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:55 compute-0 sudo[259740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:55 compute-0 sudo[259740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:55 compute-0 sudo[259765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:55 compute-0 sudo[259765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:55 compute-0 sudo[259765]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:55 compute-0 sudo[259790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:55 compute-0 sudo[259790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:55 compute-0 sudo[259790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:55 compute-0 sudo[259815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:55:55 compute-0 sudo[259815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:55:55 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 07:55:55 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:55.734242) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:55:55 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 07:55:55 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402955734395, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1464, "num_deletes": 252, "total_data_size": 2279346, "memory_usage": 2319368, "flush_reason": "Manual Compaction"}
Nov 29 07:55:55 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
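[annotation] The pg_autoscaler figures above follow a simple proportion: a pool's raw PG target is its share of used capacity, times its bias, times a cluster-wide PG budget. Assuming the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs (a budget of 300, inferred here because it reproduces the logged numbers exactly), the arithmetic checks out; the quantization of the raw target to the final power-of-two pg_num is elided:

    # PG budget assumed: mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    PG_BUDGET = 100 * 3

    pools = {  # pool: (usage_ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }

    for name, (usage, bias) in pools.items():
        print(f"{name}: raw pg target {usage * bias * PG_BUDGET}")
    # .mgr:               raw pg target 0.0021557...   (log: 0.0021557249951162337)
    # cephfs.cephfs.meta: raw pg target 0.00061047...  (log: 0.0006104707950771635)
    # .rgw.root:          raw pg target 7.63088e-05    (log: 7.630884938464544e-05)
    # default.rgw.log:    raw pg target 0.00064862...  (log: 0.0006486252197694863)

Because every raw target is far below the current pg_num, each pool is "quantized" back to its existing value and no resizing is proposed.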
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402956033388, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1314289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16769, "largest_seqno": 18232, "table_properties": {"data_size": 1309221, "index_size": 2336, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12976, "raw_average_key_size": 20, "raw_value_size": 1298096, "raw_average_value_size": 2018, "num_data_blocks": 107, "num_entries": 643, "num_filter_entries": 643, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402798, "oldest_key_time": 1764402798, "file_creation_time": 1764402955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 299200 microseconds, and 7495 cpu microseconds.
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:55.947408614 +0000 UTC m=+0.028694991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.033461) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1314289 bytes OK
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.033483) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.113717) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.113766) EVENT_LOG_v1 {"time_micros": 1764402956113755, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.113792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2272902, prev total WAL file size 2272902, number of live WAL files 2.
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.114976) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373536' seq:0, type:0; will stop at (end)
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1283KB)], [38(8303KB)]
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402956115079, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9816572, "oldest_snapshot_seqno": -1}
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:56.130319318 +0000 UTC m=+0.211605675 container create ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4507 keys, 7424363 bytes, temperature: kUnknown
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402956456214, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7424363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7393967, "index_size": 17998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 110514, "raw_average_key_size": 24, "raw_value_size": 7312234, "raw_average_value_size": 1622, "num_data_blocks": 756, "num_entries": 4507, "num_filter_entries": 4507, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764402956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:55:56 compute-0 systemd[1]: Started libpod-conmon-ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc.scope.
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.456508) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7424363 bytes
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.526829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.8 rd, 21.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(13.1) write-amplify(5.6) OK, records in: 4952, records dropped: 445 output_compression: NoCompression
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.526880) EVENT_LOG_v1 {"time_micros": 1764402956526857, "job": 18, "event": "compaction_finished", "compaction_time_micros": 341233, "compaction_time_cpu_micros": 28029, "output_level": 6, "num_output_files": 1, "total_output_size": 7424363, "num_input_records": 4952, "num_output_records": 4507, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402956527792, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402956529742, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.114810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.529839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.529847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.529850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.529853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:55:56.529856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
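[annotation] The job 18 compaction summary can be cross-checked from the byte counts RocksDB logs itself: a 1,314,289-byte L0 input (table #40), an 8,502,283-byte L6 input (table #38, the remainder of input_data_size 9,816,572), one 7,424,363-byte output (table #41), produced in 341,233 microseconds. Back-of-the-envelope, using only values from the log:

    # Cross-checking RocksDB's own compaction stats (job 18) from the log.
    in_l0  = 1_314_289            # table #40, the freshly flushed L0 file
    in_all = 9_816_572            # "input_data_size" from compaction_started
    in_l6  = in_all - in_l0       # table #38, the existing L6 file
    out    = 7_424_363            # table #41, the compacted output
    usecs  = 341_233              # "compaction_time_micros"

    print(f"write-amplify      {out / in_l0:.1f}")             # 5.6  (log: 5.6)
    print(f"read-write-amplify {(in_all + out) / in_l0:.1f}")  # 13.1 (log: 13.1)
    # bytes per microsecond == decimal MB per second:
    print(f"rd MB/sec          {in_all / usecs:.1f}")          # 28.8 (log: 28.8)
    print(f"wr MB/sec          {out / usecs:.1f}")             # 21.8 (log: 21.8)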
Nov 29 07:55:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:56 compute-0 ceph-mon[75237]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:56.604300639 +0000 UTC m=+0.685586996 container init ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:56.611673737 +0000 UTC m=+0.692960094 container start ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:55:56 compute-0 thirsty_yonath[259894]: 167 167
Nov 29 07:55:56 compute-0 systemd[1]: libpod-ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc.scope: Deactivated successfully.
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:56.712521471 +0000 UTC m=+0.793807838 container attach ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:55:56 compute-0 podman[259879]: 2025-11-29 07:55:56.713412336 +0000 UTC m=+0.794698693 container died ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9582022a4a67743005256dd3b30ee7134d2b5b357d847be24968c11aafe4681d-merged.mount: Deactivated successfully.
Nov 29 07:55:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:57 compute-0 podman[259879]: 2025-11-29 07:55:57.379214369 +0000 UTC m=+1.460500766 container remove ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:55:57 compute-0 systemd[1]: libpod-conmon-ca07bdf371efa7fa905ee383b8ea3ab95a999aec0ec126bd1f8e359dc6305ddc.scope: Deactivated successfully.
Nov 29 07:55:57 compute-0 podman[259918]: 2025-11-29 07:55:57.586872859 +0000 UTC m=+0.047797504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:57 compute-0 podman[259918]: 2025-11-29 07:55:57.785868565 +0000 UTC m=+0.246793190 container create 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:55:57 compute-0 ceph-mon[75237]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:58 compute-0 systemd[1]: Started libpod-conmon-18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73.scope.
Nov 29 07:55:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4644b9d4862cf3f81f0da35f0c9e5ca4c49bd0d3ae2bbfdaa47925fa9630f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4644b9d4862cf3f81f0da35f0c9e5ca4c49bd0d3ae2bbfdaa47925fa9630f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4644b9d4862cf3f81f0da35f0c9e5ca4c49bd0d3ae2bbfdaa47925fa9630f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4644b9d4862cf3f81f0da35f0c9e5ca4c49bd0d3ae2bbfdaa47925fa9630f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:58 compute-0 podman[259918]: 2025-11-29 07:55:58.168786823 +0000 UTC m=+0.629711448 container init 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:55:58 compute-0 podman[259918]: 2025-11-29 07:55:58.18205277 +0000 UTC m=+0.642977405 container start 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:55:58 compute-0 podman[259918]: 2025-11-29 07:55:58.470152165 +0000 UTC m=+0.931076860 container attach 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:55:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271394369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271394369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
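[annotation] The two dispatches above are client.openstack (the Cinder RBD driver at 192.168.122.10) polling pool capacity; the payloads are the JSON forms of `ceph df` and `ceph osd pool get-quota volumes`. A sketch of issuing the same monitor commands through the python-rados binding, assuming a reachable cluster and a valid keyring for client.openstack:

    # Sketch: the same two mon commands the audit log shows client.openstack
    # dispatching, issued via python-rados. Conffile/keyring are assumptions.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "rc:", ret, "reply bytes:", len(outbuf))

    cluster.shutdown()

Each such call shows up on the leader mon exactly as above: a handle_command line followed by an audit-channel "dispatch" entry.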
Nov 29 07:55:58 compute-0 infallible_burnell[259935]: {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     "0": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "devices": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "/dev/loop3"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             ],
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_name": "ceph_lv0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_size": "21470642176",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "name": "ceph_lv0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "tags": {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.crush_device_class": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.encrypted": "0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_id": "0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.vdo": "0"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             },
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "vg_name": "ceph_vg0"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         }
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     ],
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     "1": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "devices": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "/dev/loop4"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             ],
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_name": "ceph_lv1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_size": "21470642176",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "name": "ceph_lv1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "tags": {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.crush_device_class": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.encrypted": "0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_id": "1",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.vdo": "0"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             },
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "vg_name": "ceph_vg1"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         }
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     ],
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     "2": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "devices": [
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "/dev/loop5"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             ],
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_name": "ceph_lv2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_size": "21470642176",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "name": "ceph_lv2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "tags": {
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.crush_device_class": "",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.encrypted": "0",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osd_id": "2",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:                 "ceph.vdo": "0"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             },
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "type": "block",
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:             "vg_name": "ceph_vg2"
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:         }
Nov 29 07:55:58 compute-0 infallible_burnell[259935]:     ]
Nov 29 07:55:58 compute-0 infallible_burnell[259935]: }
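[annotation] The JSON emitted by infallible_burnell is the reply to the `ceph-volume lvm list --format json` call dispatched at 07:55:55: one entry per OSD id, each listing the backing logical volume with its ceph.* LV tags broken out under "tags". A short consumer sketch, with field names taken from the output above, that reduces the report to an osd_id -> device table:

    # Sketch: summarising `ceph-volume lvm list --format json` output
    # (like the blob above) into an OSD -> device table.
    import json
    import sys

    def summarize(report: dict) -> None:
        for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"(fsid {tags['ceph.osd_fsid']}, type {lv['type']})")

    if __name__ == "__main__":
        # e.g. ceph-volume lvm list --format json > report.json
        with open(sys.argv[1]) as fh:
            summarize(json.load(fh))

For this host it would print three rows, osd.0 through osd.2, each backed by one ceph_lvN logical volume on a /dev/loopN device, matching the "0 physical, 3 LVM" device count reported earlier.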
Nov 29 07:55:58 compute-0 systemd[1]: libpod-18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73.scope: Deactivated successfully.
Nov 29 07:55:58 compute-0 podman[259918]: 2025-11-29 07:55:58.977038048 +0000 UTC m=+1.437962703 container died 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:55:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:55:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/271394369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/271394369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1de4644b9d4862cf3f81f0da35f0c9e5ca4c49bd0d3ae2bbfdaa47925fa9630f-merged.mount: Deactivated successfully.
Nov 29 07:56:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:01 compute-0 ceph-mon[75237]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:01 compute-0 podman[259918]: 2025-11-29 07:56:01.826951155 +0000 UTC m=+4.287875780 container remove 18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_burnell, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:56:01 compute-0 systemd[1]: libpod-conmon-18f9fc0593bb381b85a5aab4105f541c74d9eb7d648e442eb9477e0a5d2e9d73.scope: Deactivated successfully.
Nov 29 07:56:01 compute-0 sudo[259815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:01 compute-0 podman[259956]: 2025-11-29 07:56:01.947851476 +0000 UTC m=+1.755370834 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:56:01 compute-0 sudo[259976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:02 compute-0 sudo[259976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:02 compute-0 sudo[259976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:02 compute-0 sudo[260008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:02 compute-0 sudo[260008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:02 compute-0 sudo[260008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:02 compute-0 sudo[260033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:02 compute-0 sudo[260033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:02 compute-0 sudo[260033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:02 compute-0 sudo[260058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:56:02 compute-0 sudo[260058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:02 compute-0 podman[260124]: 2025-11-29 07:56:02.665323527 +0000 UTC m=+0.031090825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:02 compute-0 ceph-mon[75237]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:02 compute-0 podman[260124]: 2025-11-29 07:56:02.998127292 +0000 UTC m=+0.363894580 container create 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:03 compute-0 systemd[1]: Started libpod-conmon-695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702.scope.
Nov 29 07:56:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:04 compute-0 podman[260124]: 2025-11-29 07:56:04.167792919 +0000 UTC m=+1.533560217 container init 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:56:04 compute-0 podman[260124]: 2025-11-29 07:56:04.182966726 +0000 UTC m=+1.548734004 container start 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:56:04 compute-0 sharp_chaum[260140]: 167 167
Nov 29 07:56:04 compute-0 systemd[1]: libpod-695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702.scope: Deactivated successfully.
Nov 29 07:56:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:05 compute-0 podman[260124]: 2025-11-29 07:56:05.331418343 +0000 UTC m=+2.697185661 container attach 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:56:05 compute-0 podman[260124]: 2025-11-29 07:56:05.333156149 +0000 UTC m=+2.698923457 container died 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:56:05 compute-0 ceph-mon[75237]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a74919001c946b9af01bdcc06e321c09d573e3cfdab17260bf7a13fbb6e89140-merged.mount: Deactivated successfully.
Nov 29 07:56:08 compute-0 ceph-mon[75237]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:11 compute-0 ceph-mon[75237]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:11 compute-0 ceph-mon[75237]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:11 compute-0 podman[260124]: 2025-11-29 07:56:11.705748513 +0000 UTC m=+9.071515821 container remove 695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaum, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:56:11 compute-0 systemd[1]: libpod-conmon-695be6e1bb8715668ab60ea5d07937525e29f544f3b63dbd94123aff80def702.scope: Deactivated successfully.
Nov 29 07:56:11 compute-0 podman[260164]: 2025-11-29 07:56:11.904258397 +0000 UTC m=+0.037032444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:12 compute-0 podman[260164]: 2025-11-29 07:56:12.597383274 +0000 UTC m=+0.730157271 container create 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:12 compute-0 ceph-mon[75237]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:12 compute-0 systemd[1]: Started libpod-conmon-237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898.scope.
Nov 29 07:56:12 compute-0 podman[260178]: 2025-11-29 07:56:12.735396876 +0000 UTC m=+0.089999234 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 07:56:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1ad8e253f2749bfb2d80ebeb213b987691c3bd974b52052f0125bfb79dbf90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1ad8e253f2749bfb2d80ebeb213b987691c3bd974b52052f0125bfb79dbf90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1ad8e253f2749bfb2d80ebeb213b987691c3bd974b52052f0125bfb79dbf90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1ad8e253f2749bfb2d80ebeb213b987691c3bd974b52052f0125bfb79dbf90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:13 compute-0 podman[260164]: 2025-11-29 07:56:13.002341013 +0000 UTC m=+1.135114990 container init 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:56:13 compute-0 podman[260164]: 2025-11-29 07:56:13.0171111 +0000 UTC m=+1.149885097 container start 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:56:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:13 compute-0 podman[260164]: 2025-11-29 07:56:13.721543801 +0000 UTC m=+1.854317838 container attach 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:56:13 compute-0 ceph-mon[75237]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]: {
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_id": 2,
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "type": "bluestore"
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     },
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_id": 0,
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "type": "bluestore"
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     },
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_id": 1,
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:         "type": "bluestore"
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]:     }
Nov 29 07:56:14 compute-0 nifty_ramanujan[260197]: }
Nov 29 07:56:14 compute-0 systemd[1]: libpod-237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898.scope: Deactivated successfully.
Nov 29 07:56:14 compute-0 systemd[1]: libpod-237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898.scope: Consumed 1.104s CPU time.
Nov 29 07:56:14 compute-0 podman[260164]: 2025-11-29 07:56:14.122990477 +0000 UTC m=+2.255764504 container died 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:56:14 compute-0 sshd[189732]: drop connection #0 from [45.78.219.195]:50454 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 07:56:14 compute-0 nova_compute[255040]: 2025-11-29 07:56:14.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:14 compute-0 nova_compute[255040]: 2025-11-29 07:56:14.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:14 compute-0 nova_compute[255040]: 2025-11-29 07:56:14.978 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:56:15 compute-0 nova_compute[255040]: 2025-11-29 07:56:14.999 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:56:15 compute-0 nova_compute[255040]: 2025-11-29 07:56:15.002 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:15 compute-0 nova_compute[255040]: 2025-11-29 07:56:15.002 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:56:15 compute-0 nova_compute[255040]: 2025-11-29 07:56:15.017 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.023 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.023 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.023 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:18 compute-0 ceph-mon[75237]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.991 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:56:18 compute-0 nova_compute[255040]: 2025-11-29 07:56:18.992 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c1ad8e253f2749bfb2d80ebeb213b987691c3bd974b52052f0125bfb79dbf90-merged.mount: Deactivated successfully.
Nov 29 07:56:19 compute-0 ceph-mon[75237]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:19 compute-0 nova_compute[255040]: 2025-11-29 07:56:19.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:20 compute-0 nova_compute[255040]: 2025-11-29 07:56:20.019 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:20 compute-0 nova_compute[255040]: 2025-11-29 07:56:20.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:20 compute-0 nova_compute[255040]: 2025-11-29 07:56:20.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:20 compute-0 nova_compute[255040]: 2025-11-29 07:56:20.020 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:56:20 compute-0 nova_compute[255040]: 2025-11-29 07:56:20.021 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:20 compute-0 podman[260164]: 2025-11-29 07:56:20.353145292 +0000 UTC m=+8.485919249 container remove 237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:56:20 compute-0 systemd[1]: libpod-conmon-237b41904693295805f3f85f55adacbd7e3b33853e29a91077ae4e0f51990898.scope: Deactivated successfully.
Nov 29 07:56:20 compute-0 sudo[260058]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:56:20 compute-0 podman[260244]: 2025-11-29 07:56:20.459601496 +0000 UTC m=+2.622835588 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:56:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327801152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.107 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:56:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:56:21 compute-0 ceph-mon[75237]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:21 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:56:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 19521ad8-13c4-4275-9ca5-f005d438a4ac does not exist
Nov 29 07:56:21 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7dec8861-64f9-4a9a-a26c-7b026f9f1354 does not exist
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.347 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.349 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5189MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.349 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.350 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:21 compute-0 sudo[260291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:21 compute-0 sudo[260291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:21 compute-0 sudo[260291]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:21 compute-0 sudo[260316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:56:21 compute-0 sudo[260316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:21 compute-0 sudo[260316]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.546 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.547 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.647 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.738 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.738 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.761 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.786 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:56:21 compute-0 nova_compute[255040]: 2025-11-29 07:56:21.802 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207622196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:22 compute-0 nova_compute[255040]: 2025-11-29 07:56:22.235 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:22 compute-0 nova_compute[255040]: 2025-11-29 07:56:22.241 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:22 compute-0 nova_compute[255040]: 2025-11-29 07:56:22.257 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:22 compute-0 nova_compute[255040]: 2025-11-29 07:56:22.259 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:56:22 compute-0 nova_compute[255040]: 2025-11-29 07:56:22.259 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1327801152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:56:23 compute-0 ceph-mon[75237]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:56:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1207622196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:23 compute-0 nova_compute[255040]: 2025-11-29 07:56:23.259 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:23 compute-0 nova_compute[255040]: 2025-11-29 07:56:23.260 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:23 compute-0 nova_compute[255040]: 2025-11-29 07:56:23.261 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:56:24 compute-0 ceph-mon[75237]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:26 compute-0 ceph-mon[75237]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:56:27.116 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:56:27.117 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:56:27.117 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:27 compute-0 ceph-mon[75237]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:30 compute-0 ceph-mon[75237]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:32 compute-0 ceph-mon[75237]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:32 compute-0 podman[260363]: 2025-11-29 07:56:32.953634838 +0000 UTC m=+0.116964808 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 07:56:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:34 compute-0 ceph-mon[75237]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:36 compute-0 ceph-mon[75237]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:38 compute-0 ceph-mon[75237]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:56:38
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'volumes']
Nov 29 07:56:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:56:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:40 compute-0 ceph-mon[75237]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:56:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:56:42 compute-0 podman[260390]: 2025-11-29 07:56:42.896587359 +0000 UTC m=+0.060433982 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:56:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:43 compute-0 ceph-mon[75237]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:44 compute-0 ceph-mon[75237]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 07:56:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:56:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4073 writes, 18K keys, 4073 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 4072 writes, 4072 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1270 writes, 5804 keys, 1270 commit groups, 1.0 writes per commit group, ingest: 8.37 MB, 0.01 MB/s
                                           Interval WAL: 1269 writes, 1269 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     31.7      0.68              0.08         9    0.075       0      0       0.0       0.0
                                             L6      1/0    7.08 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   2.8     46.5     37.6      1.62              0.25         8    0.203     35K   4243       0.0       0.0
                                            Sum      1/0    7.08 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8     32.8     35.9      2.30              0.33        17    0.135     35K   4243       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     41.1     39.7      0.95              0.17         8    0.118     19K   2448       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     46.5     37.6      1.62              0.25         8    0.203     35K   4243       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     31.7      0.68              0.08         8    0.084       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.05 MB/s write, 0.07 GB read, 0.04 MB/s read, 2.3 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 308.00 MB usage: 4.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(261,3.86 MB,1.25395%) FilterBlock(18,108.30 KB,0.0343372%) IndexBlock(18,206.48 KB,0.0654691%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
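
The histogram body was not captured before the journal moved on; the dump ends here. The throughput figures in the dump above are reproducible from the logged totals and the 1800 s / 600 s uptimes, to within the two-decimal rounding RocksDB applies when printing. A minimal check in Python, with the values transcribed from the lines above; treating the printed sizes as binary units (1 MB = 2**20 bytes) is an assumption that fits the numbers:

    # Sanity-check the RocksDB stats dump above (figures transcribed from the log).
    uptime = 1800.0                           # "Uptime(secs): 1800.0 total"
    cum_comp_write_gb = 0.08                  # "Cumulative compaction: 0.08 GB write"
    print(cum_comp_write_gb * 1024 / uptime)  # 0.0455 -> printed as "0.05 MB/s write"

    # Block cache entry portions are fractions of the 308.00 MB capacity:
    capacity = 308 * 2**20
    filter_bytes = 108.30 * 2**10             # "FilterBlock(18,108.30 KB,...)"
    print(100 * filter_bytes / capacity)      # 0.03434 ~= logged 0.0343372%

Both results agree with the log once the rounding already baked into the printed inputs is accounted for; the interval figures check out the same way against the 600 s window.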
Nov 29 07:56:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:47 compute-0 ceph-mon[75237]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 07:56:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:56:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:56:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 07:56:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 07:56:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 07:56:48 compute-0 ceph-mon[75237]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:48 compute-0 ceph-mon[75237]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:56:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:49 compute-0 ceph-mon[75237]: osdmap e136: 3 total, 3 up, 3 in
Nov 29 07:56:50 compute-0 ceph-mon[75237]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:56:50 compute-0 podman[260413]: 2025-11-29 07:56:50.906007697 +0000 UTC m=+0.072279569 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:56:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 639 B/s wr, 9 op/s
Nov 29 07:56:52 compute-0 ceph-mon[75237]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 639 B/s wr, 9 op/s
Nov 29 07:56:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Nov 29 07:56:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 07:56:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 07:56:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 07:56:54 compute-0 ceph-mon[75237]: pgmap v1004: 305 pgs: 305 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Nov 29 07:56:54 compute-0 ceph-mon[75237]: osdmap e137: 3 total, 3 up, 3 in
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 16 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 24 op/s
Nov 29 07:56:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 07:56:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 07:56:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00026072190206420523 of space, bias 1.0, pg target 0.07821657061926157 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
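
Each pg target printed in the autoscaler pass above is the product usage_fraction x bias x (target PGs per OSD x number of OSDs). With the 3 OSDs in the map and Ceph's default mon_target_pg_per_osd of 100, the multiplier is 300, which reproduces the logged values; the 100-per-OSD figure is an assumption inferred from that fit rather than stated in the log. A minimal check in Python:

    # Reproduce the pg_autoscaler targets from the log lines above.
    # Assumption: mon_target_pg_per_osd = 100 (Ceph default), 3 OSDs up/in.
    pg_per_osd, osds = 100, 3
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'images':             (0.00026072190206420523, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),  # metadata bias 4.0
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * pg_per_osd * osds)
    # .mgr               -> 0.0021557249951162337 (matches the log)
    # images             -> 0.07821657061926157   (matches)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (matches)

The tiny targets are then quantized, with 1 as the floor for .mgr and the pools otherwise staying at their current pg_num, as the "quantized to ... (current ...)" tails show.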
Nov 29 07:56:56 compute-0 ceph-mon[75237]: pgmap v1006: 305 pgs: 305 active+clean; 16 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 24 op/s
Nov 29 07:56:56 compute-0 ceph-mon[75237]: osdmap e138: 3 total, 3 up, 3 in
Nov 29 07:56:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.6 MiB/s wr, 40 op/s
Nov 29 07:56:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 07:56:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 07:56:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 07:56:58 compute-0 ceph-mon[75237]: pgmap v1008: 305 pgs: 305 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.6 MiB/s wr, 40 op/s
Nov 29 07:56:58 compute-0 ceph-mon[75237]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 07:56:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:56:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928512923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:56:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928512923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.7 MiB/s wr, 34 op/s
Nov 29 07:56:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/928512923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/928512923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
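
The pair of audit entries above are periodic storage probes ('ceph df' and 'osd pool get-quota') arriving at the monitor as mon_commands from client.openstack. For reference, the same call can be issued directly with the python-rados bindings; a sketch, assuming a reachable cluster, a keyring for that client, and the JSON field names current Ceph releases return (the 'stats'/'total_avail_bytes' path is an assumption, not something this log shows):

    # Issue the same mon_command seen in the audit log above via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    df = json.loads(out)
    print(df['stats']['total_avail_bytes'])  # cluster-wide free bytes (key assumed)
    cluster.shutdown()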
Nov 29 07:57:00 compute-0 ceph-mon[75237]: pgmap v1010: 305 pgs: 305 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.7 MiB/s wr, 34 op/s
Nov 29 07:57:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.3 MiB/s wr, 34 op/s
Nov 29 07:57:01 compute-0 ceph-mon[75237]: pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.3 MiB/s wr, 34 op/s
Nov 29 07:57:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 23 op/s
Nov 29 07:57:03 compute-0 podman[260433]: 2025-11-29 07:57:03.931681749 +0000 UTC m=+0.101605810 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:57:04 compute-0 ceph-mon[75237]: pgmap v1012: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 23 op/s
Nov 29 07:57:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Nov 29 07:57:06 compute-0 ceph-mon[75237]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Nov 29 07:57:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 455 KiB/s wr, 5 op/s
Nov 29 07:57:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:08 compute-0 ceph-mon[75237]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 455 KiB/s wr, 5 op/s
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 403 KiB/s wr, 5 op/s
Nov 29 07:57:10 compute-0 ceph-mon[75237]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 403 KiB/s wr, 5 op/s
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.835736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403030835858, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 860, "num_deletes": 251, "total_data_size": 1149138, "memory_usage": 1166928, "flush_reason": "Manual Compaction"}
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403030855056, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1138010, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18233, "largest_seqno": 19092, "table_properties": {"data_size": 1133580, "index_size": 2082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9568, "raw_average_key_size": 19, "raw_value_size": 1124714, "raw_average_value_size": 2295, "num_data_blocks": 94, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402956, "oldest_key_time": 1764402956, "file_creation_time": 1764403030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 19379 microseconds, and 7875 cpu microseconds.
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.855146) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1138010 bytes OK
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.855164) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.857072) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.857084) EVENT_LOG_v1 {"time_micros": 1764403030857080, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.857114) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1144895, prev total WAL file size 1144895, number of live WAL files 2.
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.857609) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1111KB)], [41(7250KB)]
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403030857650, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8562373, "oldest_snapshot_seqno": -1}
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4480 keys, 6789167 bytes, temperature: kUnknown
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403030998828, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6789167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6759598, "index_size": 17299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110598, "raw_average_key_size": 24, "raw_value_size": 6678808, "raw_average_value_size": 1490, "num_data_blocks": 719, "num_entries": 4480, "num_filter_entries": 4480, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:57:10 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.999142) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6789167 bytes
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.005286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.6 rd, 48.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.1 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(13.5) write-amplify(6.0) OK, records in: 4997, records dropped: 517 output_compression: NoCompression
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.005350) EVENT_LOG_v1 {"time_micros": 1764403031005328, "job": 20, "event": "compaction_finished", "compaction_time_micros": 141292, "compaction_time_cpu_micros": 14784, "output_level": 6, "num_output_files": 1, "total_output_size": 6789167, "num_input_records": 4997, "num_output_records": 4480, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403031005865, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403031007449, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:10.857543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.007576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.007582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.007584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.007585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:57:11 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:57:11.007587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
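
The amplification figures in the job-20 summary above follow directly from the byte counts in the surrounding EVENT_LOG entries: 1138010 B of L0 input (table #43), an input_data_size of 8562373 B, and 6789167 B of output (table #44). A sketch of the arithmetic, assuming the usual RocksDB definitions (write-amplify is output bytes over L0 input bytes; read-write-amplify counts all input plus output against the L0 bytes):

    # Verify the job-20 compaction summary from the EVENT_LOG byte counts above.
    l0_in    = 1138010                # table #43 file_size (the flushed L0 file)
    total_in = 8562373                # "compaction_started" input_data_size
    out      = 6789167                # table #44 file_size
    secs     = 141292 / 1e6           # compaction_time_micros

    print(out / l0_in)                # 5.97  -> logged write-amplify(6.0)
    print((total_in + out) / l0_in)   # 13.49 -> logged read-write-amplify(13.5)
    print(total_in / secs / 1e6)      # 60.6  -> logged "MB/sec: 60.6 rd" (decimal MB)
    print(out / secs / 1e6)           # 48.05 -> logged "48.1 wr"

The 517 records dropped (4997 in, 4480 out) are the tombstones and overwritten keys eliminated by merging the L0 file into the single L6 file.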
Nov 29 07:57:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 379 KiB/s wr, 4 op/s
Nov 29 07:57:11 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:11.749 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:11 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:11.750 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:57:12 compute-0 ceph-mon[75237]: pgmap v1016: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 379 KiB/s wr, 4 op/s
Nov 29 07:57:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:57:13 compute-0 ceph-mon[75237]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:57:13 compute-0 podman[260460]: 2025-11-29 07:57:13.938305929 +0000 UTC m=+0.092268770 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:57:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 07:57:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:57:15 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:15.753 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 07:57:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 07:57:15 compute-0 nova_compute[255040]: 2025-11-29 07:57:15.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:16 compute-0 ceph-mon[75237]: pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:57:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 07:57:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 07:57:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 07:57:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 29 07:57:17 compute-0 ceph-mon[75237]: osdmap e140: 3 total, 3 up, 3 in
Nov 29 07:57:17 compute-0 nova_compute[255040]: 2025-11-29 07:57:17.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:17 compute-0 nova_compute[255040]: 2025-11-29 07:57:17.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:17 compute-0 nova_compute[255040]: 2025-11-29 07:57:17.996 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:18 compute-0 ceph-mon[75237]: osdmap e141: 3 total, 3 up, 3 in
Nov 29 07:57:18 compute-0 ceph-mon[75237]: pgmap v1021: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 29 07:57:18 compute-0 nova_compute[255040]: 2025-11-29 07:57:18.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:18 compute-0 nova_compute[255040]: 2025-11-29 07:57:18.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 29 07:57:19 compute-0 nova_compute[255040]: 2025-11-29 07:57:19.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:19 compute-0 nova_compute[255040]: 2025-11-29 07:57:19.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:57:19 compute-0 nova_compute[255040]: 2025-11-29 07:57:19.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:57:19 compute-0 nova_compute[255040]: 2025-11-29 07:57:19.996 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:57:20 compute-0 ceph-mon[75237]: pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 29 07:57:20 compute-0 nova_compute[255040]: 2025-11-29 07:57:20.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.009 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.010 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.010 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.011 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 11 op/s
Nov 29 07:57:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163287023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.479 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:21 compute-0 sudo[260502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:21 compute-0 sudo[260502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-0 sudo[260502]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:21 compute-0 sudo[260534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:21 compute-0 sudo[260534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-0 sudo[260534]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:21 compute-0 podman[260527]: 2025-11-29 07:57:21.621217098 +0000 UTC m=+0.067772062 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.669 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:21 compute-0 sudo[260573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.670 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.671 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.671 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:21 compute-0 sudo[260573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-0 sudo[260573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:21 compute-0 sudo[260598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:57:21 compute-0 sudo[260598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.797 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.798 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:57:21 compute-0 nova_compute[255040]: 2025-11-29 07:57:21.817 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2281597298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2281597298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588336492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:22 compute-0 sudo[260598]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:22 compute-0 nova_compute[255040]: 2025-11-29 07:57:22.326 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:22 compute-0 nova_compute[255040]: 2025-11-29 07:57:22.333 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:22 compute-0 nova_compute[255040]: 2025-11-29 07:57:22.346 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:57:22 compute-0 nova_compute[255040]: 2025-11-29 07:57:22.348 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:57:22 compute-0 nova_compute[255040]: 2025-11-29 07:57:22.349 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
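
The inventory Nova reports above is consistent with the hypervisor view logged at 07:57:21 under the standard placement capacity rule, (total - reserved) * allocation_ratio; that formula is placement's usual interpretation, not something this log states explicitly. A small worked check with the values transcribed from the inventory line:

    # Effective schedulable capacity implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1 -- with 0 vcpus and 0 GB
    # currently allocated, matching the "Final resource view" line above.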
Nov 29 07:57:22 compute-0 ceph-mon[75237]: pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 11 op/s
Nov 29 07:57:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3163287023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2281597298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2281597298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3588336492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331208551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1331208551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:22 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 854a8194-6e39-4b1a-afa6-4fa26aba1e30 does not exist
Nov 29 07:57:22 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev aca95b8a-7373-4799-bb0f-2ab5998ea80a does not exist
Nov 29 07:57:22 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 78e53422-7598-408c-bb77-bcf31967dae6 does not exist
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:57:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:57:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:22 compute-0 sudo[260677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:22 compute-0 sudo[260677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:22 compute-0 sudo[260677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:22 compute-0 sudo[260702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:22 compute-0 sudo[260702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:22 compute-0 sudo[260702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:22 compute-0 sudo[260727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:22 compute-0 sudo[260727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:22 compute-0 sudo[260727]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:22 compute-0 sudo[260752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:57:22 compute-0 sudo[260752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.081175533 +0000 UTC m=+0.070283060 container create 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:57:23 compute-0 systemd[1]: Started libpod-conmon-4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a.scope.
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.045059533 +0000 UTC m=+0.034167130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.179371591 +0000 UTC m=+0.168479158 container init 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.19199536 +0000 UTC m=+0.181102877 container start 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.196338556 +0000 UTC m=+0.185446083 container attach 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:57:23 compute-0 inspiring_kapitsa[260834]: 167 167
Nov 29 07:57:23 compute-0 systemd[1]: libpod-4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a.scope: Deactivated successfully.
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.200876098 +0000 UTC m=+0.189983615 container died 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-64e5e41b5dd78d0616941f7f711ffa78ffe7ddef52ffa7bd3a071f5a7403ec18-merged.mount: Deactivated successfully.
Nov 29 07:57:23 compute-0 podman[260818]: 2025-11-29 07:57:23.250996574 +0000 UTC m=+0.240104071 container remove 4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kapitsa, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:23 compute-0 systemd[1]: libpod-conmon-4f904085cd1684acc4bbfc4c163bf00a2ac6722ab4d7db129243d7900d00191a.scope: Deactivated successfully.
Nov 29 07:57:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 13 op/s
Nov 29 07:57:23 compute-0 nova_compute[255040]: 2025-11-29 07:57:23.349 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:23 compute-0 nova_compute[255040]: 2025-11-29 07:57:23.350 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:23 compute-0 nova_compute[255040]: 2025-11-29 07:57:23.350 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1331208551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1331208551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:57:23 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:57:23 compute-0 podman[260857]: 2025-11-29 07:57:23.495235684 +0000 UTC m=+0.074280896 container create 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:57:23 compute-0 systemd[1]: Started libpod-conmon-116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff.scope.
Nov 29 07:57:23 compute-0 podman[260857]: 2025-11-29 07:57:23.461493168 +0000 UTC m=+0.040538420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:23 compute-0 podman[260857]: 2025-11-29 07:57:23.580492964 +0000 UTC m=+0.159538136 container init 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:57:23 compute-0 podman[260857]: 2025-11-29 07:57:23.59818657 +0000 UTC m=+0.177231742 container start 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:57:23 compute-0 podman[260857]: 2025-11-29 07:57:23.602556967 +0000 UTC m=+0.181602139 container attach 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:57:24 compute-0 ceph-mon[75237]: pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 13 op/s
Nov 29 07:57:24 compute-0 strange_gagarin[260873]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:57:24 compute-0 strange_gagarin[260873]: --> relative data size: 1.0
Nov 29 07:57:24 compute-0 strange_gagarin[260873]: --> All data devices are unavailable
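[Annotation] "All data devices are unavailable" here is ceph-volume declining the "lvm batch" run from the earlier sudo command, most likely because the three LVs already carry prepared OSDs; the "lvm list" output further below shows ceph.osd_id tags 0, 1 and 2 on them. A minimal sketch of checking that state directly via the LVM tags ceph-volume stamps on prepared LVs, using the stock lvs JSON report:

    import json
    import subprocess

    out = subprocess.run(
        ['sudo', 'lvs', '-o', 'lv_path,lv_tags', '--reportformat', 'json'],
        capture_output=True, text=True, check=True).stdout
    # lvs JSON nests the rows under report[0]['lv'].
    for lv in json.loads(out)['report'][0]['lv']:
        if 'ceph.osd_id=' in lv['lv_tags']:
            print(lv['lv_path'], 'already belongs to an OSD')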
Nov 29 07:57:24 compute-0 systemd[1]: libpod-116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff.scope: Deactivated successfully.
Nov 29 07:57:24 compute-0 systemd[1]: libpod-116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff.scope: Consumed 1.210s CPU time.
Nov 29 07:57:24 compute-0 podman[260857]: 2025-11-29 07:57:24.856707295 +0000 UTC m=+1.435752507 container died 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87d905eedb70a3e8175b8b410e5611527796bdbe0bd101518ac327b075bc194-merged.mount: Deactivated successfully.
Nov 29 07:57:24 compute-0 podman[260857]: 2025-11-29 07:57:24.927152656 +0000 UTC m=+1.506197828 container remove 116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:24 compute-0 systemd[1]: libpod-conmon-116be069040747fc6fac294cb7bb8e80fe234babb3b72a797423d5aecb45aaff.scope: Deactivated successfully.
Nov 29 07:57:24 compute-0 sudo[260752]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:25 compute-0 sudo[260914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:25 compute-0 sudo[260914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:25 compute-0 sudo[260914]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:25 compute-0 sudo[260939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:25 compute-0 sudo[260939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:25 compute-0 sudo[260939]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:25 compute-0 sudo[260964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:25 compute-0 sudo[260964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:25 compute-0 sudo[260964]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 760 B/s wr, 13 op/s
Nov 29 07:57:25 compute-0 sudo[260989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:57:25 compute-0 sudo[260989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:25 compute-0 podman[261056]: 2025-11-29 07:57:25.740045442 +0000 UTC m=+0.021825748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:26 compute-0 ceph-mon[75237]: pgmap v1025: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 760 B/s wr, 13 op/s
Nov 29 07:57:26 compute-0 podman[261056]: 2025-11-29 07:57:26.479879564 +0000 UTC m=+0.761659850 container create 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:57:26 compute-0 systemd[1]: Started libpod-conmon-38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f.scope.
Nov 29 07:57:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:26 compute-0 podman[261056]: 2025-11-29 07:57:26.826205476 +0000 UTC m=+1.107985802 container init 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:57:26 compute-0 podman[261056]: 2025-11-29 07:57:26.83899481 +0000 UTC m=+1.120775096 container start 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:57:26 compute-0 practical_banach[261073]: 167 167
Nov 29 07:57:26 compute-0 systemd[1]: libpod-38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f.scope: Deactivated successfully.
Nov 29 07:57:26 compute-0 podman[261056]: 2025-11-29 07:57:26.857589449 +0000 UTC m=+1.139369785 container attach 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:57:26 compute-0 podman[261056]: 2025-11-29 07:57:26.858610287 +0000 UTC m=+1.140390603 container died 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74054b5a141f7cb4d9c773dcded0dd7e151722dc9dbbdf98bc547983a97e253-merged.mount: Deactivated successfully.
Nov 29 07:57:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:27.117 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:27.119 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:57:27.119 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 29 07:57:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 07:57:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 07:57:28 compute-0 podman[261056]: 2025-11-29 07:57:28.196505604 +0000 UTC m=+2.478285890 container remove 38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:57:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 07:57:28 compute-0 ceph-mon[75237]: pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 29 07:57:28 compute-0 ceph-mon[75237]: osdmap e142: 3 total, 3 up, 3 in
Nov 29 07:57:28 compute-0 systemd[1]: libpod-conmon-38c52d5a2131d355387f35d0da93067696db02f19f9f82cafc29bb995eaa784f.scope: Deactivated successfully.
Nov 29 07:57:28 compute-0 podman[261098]: 2025-11-29 07:57:28.393969768 +0000 UTC m=+0.052003288 container create f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:57:28 compute-0 systemd[1]: Started libpod-conmon-f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3.scope.
Nov 29 07:57:28 compute-0 podman[261098]: 2025-11-29 07:57:28.370676182 +0000 UTC m=+0.028709722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85928c023737f6877a0a388d8288199ba19c691fa79c710cac3cadd4c197bb99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85928c023737f6877a0a388d8288199ba19c691fa79c710cac3cadd4c197bb99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85928c023737f6877a0a388d8288199ba19c691fa79c710cac3cadd4c197bb99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85928c023737f6877a0a388d8288199ba19c691fa79c710cac3cadd4c197bb99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:29 compute-0 podman[261098]: 2025-11-29 07:57:29.174164103 +0000 UTC m=+0.832197713 container init f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:57:29 compute-0 podman[261098]: 2025-11-29 07:57:29.189314611 +0000 UTC m=+0.847348131 container start f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:57:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 29 07:57:30 compute-0 busy_maxwell[261114]: {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     "0": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "devices": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "/dev/loop3"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             ],
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_name": "ceph_lv0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_size": "21470642176",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "name": "ceph_lv0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "tags": {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_name": "ceph",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.crush_device_class": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.encrypted": "0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_id": "0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.vdo": "0"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             },
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "vg_name": "ceph_vg0"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         }
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     ],
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     "1": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "devices": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "/dev/loop4"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             ],
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_name": "ceph_lv1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_size": "21470642176",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "name": "ceph_lv1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "tags": {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_name": "ceph",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.crush_device_class": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.encrypted": "0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_id": "1",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.vdo": "0"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             },
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "vg_name": "ceph_vg1"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         }
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     ],
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     "2": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "devices": [
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "/dev/loop5"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             ],
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_name": "ceph_lv2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_size": "21470642176",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "name": "ceph_lv2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "tags": {
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.cluster_name": "ceph",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.crush_device_class": "",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.encrypted": "0",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osd_id": "2",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:                 "ceph.vdo": "0"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             },
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "type": "block",
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:             "vg_name": "ceph_vg2"
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:         }
Nov 29 07:57:30 compute-0 busy_maxwell[261114]:     ]
Nov 29 07:57:30 compute-0 busy_maxwell[261114]: }
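[Annotation] The JSON block printed by the busy_maxwell container is the full "ceph-volume lvm list --format json" report: a map of osd_id to the logical volumes backing it. A short sketch of collapsing it into an osd_id -> (device path, osd_fsid) map, which is the shape cephadm needs when reconciling deployed OSDs against its spec; the file path is an assumption standing in for the captured output above.

    import json

    with open('lvm_list.json') as f:   # the JSON emitted above, saved locally (assumed path)
        report = json.load(f)

    # Each osd_id maps to a list of LV records; take path and osd_fsid from the tags.
    osds = {
        osd_id: (lv['lv_path'], lv['tags']['ceph.osd_fsid'])
        for osd_id, lvs in report.items()
        for lv in lvs
    }
    print(osds)
    # e.g. {'0': ('/dev/ceph_vg0/ceph_lv0', 'd2206e5d-36d0-4dcd-a218-91d42a449afa'), ...}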
Nov 29 07:57:30 compute-0 systemd[1]: libpod-f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3.scope: Deactivated successfully.
Nov 29 07:57:30 compute-0 podman[261098]: 2025-11-29 07:57:30.039513217 +0000 UTC m=+1.697546787 container attach f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:57:30 compute-0 podman[261098]: 2025-11-29 07:57:30.040398761 +0000 UTC m=+1.698432301 container died f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:57:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Nov 29 07:57:32 compute-0 ceph-mon[75237]: pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 22 op/s
Nov 29 07:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-85928c023737f6877a0a388d8288199ba19c691fa79c710cac3cadd4c197bb99-merged.mount: Deactivated successfully.
Nov 29 07:57:32 compute-0 podman[261098]: 2025-11-29 07:57:32.97144367 +0000 UTC m=+4.629477190 container remove f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:57:33 compute-0 sudo[260989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:33 compute-0 systemd[1]: libpod-conmon-f48e37a6ebc23d0cdbf9f1b4b58c5fd62b1c9d144b8a3e2d02a715873a15d3d3.scope: Deactivated successfully.
Nov 29 07:57:33 compute-0 ceph-mon[75237]: pgmap v1029: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Nov 29 07:57:33 compute-0 sudo[261135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:33 compute-0 sudo[261135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:33 compute-0 sudo[261135]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:33 compute-0 sudo[261160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:33 compute-0 sudo[261160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 07:57:33 compute-0 sudo[261160]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 07:57:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 07:57:33 compute-0 sudo[261185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:33 compute-0 sudo[261185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:33 compute-0 sudo[261185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.6 KiB/s wr, 36 op/s
Nov 29 07:57:33 compute-0 sudo[261210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:57:33 compute-0 sudo[261210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
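[Annotation] After the LVM inventory, cephadm runs the complementary "raw list" pass (the sudo command just above) to catch OSDs prepared directly on raw devices. A hedged sketch of driving that same call through the cephadm wrapper; the fsid is copied from the log line, and the raw-list JSON is assumed to be keyed by osd_fsid with device/osd_id fields, matching the lvm case.

    import json
    import subprocess

    FSID = '321e9cb7-01a2-5759-bf8c-981c9a64aa3e'
    cmd = ['sudo', 'cephadm', 'ceph-volume', '--fsid', FSID,
           '--', 'raw', 'list', '--format', 'json']
    raw = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                    check=True).stdout)
    for osd_fsid, dev in raw.items():
        print(osd_fsid, dev.get('device'), 'osd_id', dev.get('osd_id'))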
Nov 29 07:57:33 compute-0 podman[261275]: 2025-11-29 07:57:33.708380925 +0000 UTC m=+0.046783288 container create f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:57:33 compute-0 systemd[1]: Started libpod-conmon-f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd.scope.
Nov 29 07:57:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:33 compute-0 podman[261275]: 2025-11-29 07:57:33.690011612 +0000 UTC m=+0.028413995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288777572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288777572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:34 compute-0 podman[261275]: 2025-11-29 07:57:34.116774725 +0000 UTC m=+0.455177118 container init f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:34 compute-0 podman[261275]: 2025-11-29 07:57:34.124462171 +0000 UTC m=+0.462864574 container start f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:57:34 compute-0 vigorous_bassi[261291]: 167 167
Nov 29 07:57:34 compute-0 systemd[1]: libpod-f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd.scope: Deactivated successfully.
Nov 29 07:57:34 compute-0 podman[261275]: 2025-11-29 07:57:34.134175262 +0000 UTC m=+0.472577625 container attach f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:57:34 compute-0 podman[261275]: 2025-11-29 07:57:34.134612373 +0000 UTC m=+0.473014736 container died f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:57:34 compute-0 ceph-mon[75237]: osdmap e143: 3 total, 3 up, 3 in
Nov 29 07:57:34 compute-0 ceph-mon[75237]: pgmap v1031: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.6 KiB/s wr, 36 op/s
Nov 29 07:57:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4288777572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4288777572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4ccf2d3676354b8c0246f5ba29c594473f1cbfe3c396086078ebab360114a0-merged.mount: Deactivated successfully.
Nov 29 07:57:34 compute-0 podman[261275]: 2025-11-29 07:57:34.737051725 +0000 UTC m=+1.075454088 container remove f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:57:34 compute-0 systemd[1]: libpod-conmon-f1e9b48570e7f842c2f49d5eaa94c68e90fb93090368f66ef3921dc42c1958bd.scope: Deactivated successfully.
Nov 29 07:57:34 compute-0 podman[261296]: 2025-11-29 07:57:34.838345406 +0000 UTC m=+0.668787005 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
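[Annotation] The health_status=healthy event above is podman executing the healthcheck configured for ovn_controller (test '/openstack/healthcheck', per the embedded config_data). The same check can be forced by hand; a minimal sketch, with the container name taken from the event:

    import subprocess

    # "podman healthcheck run" exits 0 when the configured test passes.
    rc = subprocess.run(['sudo', 'podman', 'healthcheck', 'run',
                         'ovn_controller']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')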
Nov 29 07:57:34 compute-0 podman[261342]: 2025-11-29 07:57:34.89657613 +0000 UTC m=+0.028277691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:57:35 compute-0 podman[261342]: 2025-11-29 07:57:35.146609686 +0000 UTC m=+0.278311227 container create 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:57:35 compute-0 systemd[1]: Started libpod-conmon-8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3.scope.
Nov 29 07:57:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353305fe65fb52354ddadbf60bdb518002806f27a505648332688442fa439d0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353305fe65fb52354ddadbf60bdb518002806f27a505648332688442fa439d0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353305fe65fb52354ddadbf60bdb518002806f27a505648332688442fa439d0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353305fe65fb52354ddadbf60bdb518002806f27a505648332688442fa439d0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Nov 29 07:57:35 compute-0 podman[261342]: 2025-11-29 07:57:35.785430375 +0000 UTC m=+0.917131996 container init 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:35 compute-0 podman[261342]: 2025-11-29 07:57:35.795972308 +0000 UTC m=+0.927673839 container start 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:57:36 compute-0 podman[261342]: 2025-11-29 07:57:36.024302781 +0000 UTC m=+1.156004342 container attach 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:57:36 compute-0 ceph-mon[75237]: pgmap v1032: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Nov 29 07:57:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717303125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717303125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
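The df / "osd pool get-quota" pairs that client.openstack dispatches every second or so are a periodic capacity poll against the monitor from 192.168.122.10 (most plausibly an RBD-backed OpenStack service such as Cinder checking pool usage and quota for 'volumes'). A minimal sketch of issuing the same two mon commands with the rados Python bindings, assuming a readable /etc/ceph/ceph.conf and the 'openstack' keyring this log shows in use:

    import json
    import rados

    # Connect the same way the audited client does: client.openstack + ceph.conf.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            # mon_command takes the command as a JSON string plus an input buffer.
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            if ret != 0:
                raise RuntimeError(f'{cmd["prefix"]} failed: {errs}')
            print(cmd["prefix"], '->', json.loads(out))

Each such call is what shows up in the mon's audit channel as a "handle_command mon_command(...)" line followed by the "dispatch" entry.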
Nov 29 07:57:36 compute-0 fervent_shamir[261358]: {
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_id": 2,
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "type": "bluestore"
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     },
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_id": 0,
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "type": "bluestore"
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     },
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_id": 1,
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:         "type": "bluestore"
Nov 29 07:57:36 compute-0 fervent_shamir[261358]:     }
Nov 29 07:57:36 compute-0 fervent_shamir[261358]: }
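The JSON printed by the one-shot fervent_shamir ceph container is shaped like ceph-volume raw list output (cephadm launches these short-lived containers to refresh its per-host device inventory, which the mgr then persists via the config-key set mgr/cephadm/host.compute-0.devices.0 command a few lines below): a map keyed by OSD UUID, each entry carrying the backing LVM device, OSD id, cluster fsid, and objectstore type. A small parsing sketch over one entry copied verbatim from the output above:

    import json

    # One entry copied from the container output above.
    inventory = json.loads("""
    {
        "2406c235-b877-477d-8a53-b5b71e6811ae": {
            "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
            "type": "bluestore"
        }
    }
    """)

    for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        assert osd["osd_uuid"] == osd_uuid      # the map is keyed by OSD UUID
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")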
Nov 29 07:57:36 compute-0 systemd[1]: libpod-8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3.scope: Deactivated successfully.
Nov 29 07:57:36 compute-0 systemd[1]: libpod-8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3.scope: Consumed 1.043s CPU time.
Nov 29 07:57:36 compute-0 podman[261342]: 2025-11-29 07:57:36.837308298 +0000 UTC m=+1.969009869 container died 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-353305fe65fb52354ddadbf60bdb518002806f27a505648332688442fa439d0f-merged.mount: Deactivated successfully.
Nov 29 07:57:36 compute-0 podman[261342]: 2025-11-29 07:57:36.91328058 +0000 UTC m=+2.044982111 container remove 8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:57:36 compute-0 systemd[1]: libpod-conmon-8f4880a14d432be033d23b49e5a5422324a71852424c70fa4335ebc319d7d5c3.scope: Deactivated successfully.
Nov 29 07:57:36 compute-0 sudo[261210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:57:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:57:36 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:36 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f70c125e-1cf6-4b4a-9715-d13de50ee196 does not exist
Nov 29 07:57:36 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 222c5742-b983-4528-8c72-00355216098a does not exist
Nov 29 07:57:37 compute-0 sudo[261402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:37 compute-0 sudo[261402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:37 compute-0 sudo[261402]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:37 compute-0 sudo[261427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:57:37 compute-0 sudo[261427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:37 compute-0 sudo[261427]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3717303125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3717303125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:57:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.1 KiB/s wr, 79 op/s
Nov 29 07:57:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518628762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518628762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:38 compute-0 ceph-mon[75237]: pgmap v1033: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.1 KiB/s wr, 79 op/s
Nov 29 07:57:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/518628762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/518628762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:57:38
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 07:57:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:57:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.8 KiB/s wr, 72 op/s
Nov 29 07:57:40 compute-0 ceph-mon[75237]: pgmap v1034: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.8 KiB/s wr, 72 op/s
Nov 29 07:57:40 compute-0 ceph-mgr[75527]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1430667654
Nov 29 07:57:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 27 KiB/s wr, 63 op/s
Nov 29 07:57:42 compute-0 ceph-mon[75237]: pgmap v1035: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 27 KiB/s wr, 63 op/s
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:57:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:57:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 27 KiB/s wr, 67 op/s
Nov 29 07:57:44 compute-0 ceph-mon[75237]: pgmap v1036: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 27 KiB/s wr, 67 op/s
Nov 29 07:57:44 compute-0 podman[261452]: 2025-11-29 07:57:44.939039525 +0000 UTC m=+0.101334014 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:57:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 23 KiB/s wr, 56 op/s
Nov 29 07:57:45 compute-0 ceph-mon[75237]: pgmap v1037: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 23 KiB/s wr, 56 op/s
Nov 29 07:57:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 46 op/s
Nov 29 07:57:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/692294738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/692294738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:48 compute-0 ceph-mon[75237]: pgmap v1038: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 46 op/s
Nov 29 07:57:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/692294738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/692294738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 29 07:57:49 compute-0 ceph-mon[75237]: pgmap v1039: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 29 07:57:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 29 07:57:51 compute-0 podman[261473]: 2025-11-29 07:57:51.898126288 +0000 UTC m=+0.072319814 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 07:57:52 compute-0 ceph-mon[75237]: pgmap v1040: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 22 KiB/s wr, 12 op/s
Nov 29 07:57:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 426 B/s wr, 5 op/s
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.415 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.416 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.440 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.563 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.564 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.575 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.575 255071 INFO nova.compute.claims [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:57:53 compute-0 nova_compute[255040]: 2025-11-29 07:57:53.700 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76412812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.173 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.182 255071 DEBUG nova.compute.provider_tree [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.202 255071 DEBUG nova.scheduler.client.report [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
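The inventory dict above is what placement actually schedules against once reservations and allocation ratios are applied, per the standard capacity formula (total - reserved) * allocation_ratio: 8 VCPU x 4.0 = 32 schedulable vCPUs, (7680 - 512) x 1.0 = 7168 MB of schedulable RAM, and 59 GB x 0.9 = 53.1 GB of disk. A quick check of that arithmetic:

    # Inventory as reported for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    for rc, v in inv.items():
        # Placement capacity: (total - reserved) * allocation_ratio.
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1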
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.234 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.235 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.313 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.313 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.343 255071 INFO nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.369 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:57:54 compute-0 ceph-mon[75237]: pgmap v1041: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 426 B/s wr, 5 op/s
Nov 29 07:57:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/76412812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.471 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.473 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.473 255071 INFO nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Creating image(s)
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.503 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.531 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.556 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.560 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.562 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.869 255071 WARNING oslo_policy.policy [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.869 255071 WARNING oslo_policy.policy [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:57:54 compute-0 nova_compute[255040]: 2025-11-29 07:57:54.871 255071 DEBUG nova.policy [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b288cb3716343b3b86a120d6c892ab4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5757f1dcffd49e48fe28b1c2c26b71a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:57:55 compute-0 nova_compute[255040]: 2025-11-29 07:57:55.117 255071 DEBUG nova.virt.libvirt.imagebackend [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Image locations are: [{'url': 'rbd://321e9cb7-01a2-5759-bf8c-981c9a64aa3e/images/36a9388d-0d77-4d24-a915-be92247e5dbc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://321e9cb7-01a2-5759-bf8c-981c9a64aa3e/images/36a9388d-0d77-4d24-a915-be92247e5dbc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Nov 29 07:57:55 compute-0 nova_compute[255040]: 2025-11-29 07:57:55.836 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Successfully created port: 45c74639-2d52-4fcf-9874-4ec3f104851e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.514940255258188e-06 of space, bias 1.0, pg target 0.0013544820765774564 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
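The pg_autoscaler figures above are internally consistent with pg_target = usage_ratio x bias x (mon_target_pg_per_osd x num_osds), assuming the default mon_target_pg_per_osd of 100 and the 3 up OSDs this cluster reports, i.e. a budget of 300 PGs: for 'images', 0.000665858 x 1.0 x 300 ~= 0.1998, exactly the logged target, which then stays quantized at the current pg_num (the autoscaler only acts on large deviations). A sketch reproducing the logged targets under those assumptions:

    # (usage_ratio, bias, logged_target) copied from the autoscaler lines above.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        'images':             (0.000665858301588852,  1.0, 0.19975749047665559),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    PG_BUDGET = 100 * 3   # assumed mon_target_pg_per_osd (default 100) x 3 OSDs

    for name, (ratio, bias, logged) in pools.items():
        target = ratio * bias * PG_BUDGET
        print(f'{name}: computed {target:.10f} vs logged {logged:.10f}')

All three computed values match the logged targets to the printed precision, which supports the assumed 300-PG budget.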
Nov 29 07:57:56 compute-0 ceph-mon[75237]: pgmap v1042: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.424 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.497 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.part --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.498 255071 DEBUG nova.virt.images [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] 36a9388d-0d77-4d24-a915-be92247e5dbc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.499 255071 DEBUG nova.privsep.utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.499 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.part /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.768 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.part /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.converted" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.778 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.839 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.840 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.862 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:56 compute-0 nova_compute[255040]: 2025-11-29 07:57:56.866 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.172 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Successfully updated port: 45c74639-2d52-4fcf-9874-4ec3f104851e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.190 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.190 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquired lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.190 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:57:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 597 B/s wr, 8 op/s
Nov 29 07:57:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343623964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343623964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.418 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:57:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1343623964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1343623964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 07:57:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.652 255071 DEBUG nova.compute.manager [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-changed-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.652 255071 DEBUG nova.compute.manager [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Refreshing instance network info cache due to event network-changed-45c74639-2d52-4fcf-9874-4ec3f104851e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:57 compute-0 nova_compute[255040]: 2025-11-29 07:57:57.653 255071 DEBUG oslo_concurrency.lockutils [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.319 255071 DEBUG nova.network.neutron [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating instance_info_cache with network_info: [{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.335 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Releasing lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.335 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Instance network_info: |[{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.336 255071 DEBUG oslo_concurrency.lockutils [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.336 255071 DEBUG nova.network.neutron [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Refreshing network info cache for port 45c74639-2d52-4fcf-9874-4ec3f104851e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 07:57:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 07:57:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 07:57:58 compute-0 ceph-mon[75237]: pgmap v1043: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 597 B/s wr, 8 op/s
Nov 29 07:57:58 compute-0 ceph-mon[75237]: osdmap e144: 3 total, 3 up, 3 in
Nov 29 07:57:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509397752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2509397752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.806 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.940s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.869 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] resizing rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
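The resize target above is simply the instance's flavor root disk expressed in bytes: the Flavor dump a few lines below shows root_gb=1, and 1 GiB is 1073741824 bytes. A one-line check in Python:

    # m1.nano's root_gb=1, converted to the byte count seen in the log
    assert 1 * 1024 ** 3 == 1073741824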
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.975 255071 DEBUG nova.objects.instance [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'migration_context' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.992 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.993 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Ensure instance console log exists: /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.993 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.994 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.994 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
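The three lockutils lines above trace one complete lock lifecycle around _allocate_mdevs: acquiring, acquired (with time waited), released (with time held). A minimal sketch of the same pattern with oslo.concurrency, assuming a stock install; the lock name is copied from the log and the function body is hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Critical section; the synchronized wrapper emits the
        # Acquiring/acquired/released DEBUG lines seen above.
        return []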
Nov 29 07:57:58 compute-0 nova_compute[255040]: 2025-11-29 07:57:58.998 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Start _get_guest_xml network_info=[{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.003 255071 WARNING nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.008 255071 DEBUG nova.virt.libvirt.host [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.009 255071 DEBUG nova.virt.libvirt.host [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.012 255071 DEBUG nova.virt.libvirt.host [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.013 255071 DEBUG nova.virt.libvirt.host [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.014 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.014 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.015 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.015 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.015 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.016 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.016 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.016 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.016 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.017 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.017 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.017 255071 DEBUG nova.virt.hardware [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.024 255071 DEBUG nova.privsep.utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.024 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 895 B/s wr, 13 op/s
Nov 29 07:57:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 07:57:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 07:57:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 07:57:59 compute-0 ceph-mon[75237]: osdmap e145: 3 total, 3 up, 3 in
Nov 29 07:57:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2509397752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2509397752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751743101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.522 255071 DEBUG nova.network.neutron [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updated VIF entry in instance network info cache for port 45c74639-2d52-4fcf-9874-4ec3f104851e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.524 255071 DEBUG nova.network.neutron [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating instance_info_cache with network_info: [{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.542 255071 DEBUG oslo_concurrency.lockutils [req-ef7e9958-0f66-49fc-9cd8-4e1f1390a833 req-edfb4177-f207-47bc-8a5e-6952e1962154 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.545 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
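Nova's RBD backend runs the `ceph mon dump --format=json` command above to discover monitor addresses; those addresses feed the <host name=... port=.../> elements in the domain XML further down. A sketch of the same call, with credentials and paths copied from the log line and field names assuming the standard monmap JSON layout:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # Standard monmap JSON: a top-level "mons" list, one entry per monitor.
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('addr'))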
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.584 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:59 compute-0 nova_compute[255040]: 2025-11-29 07:57:59.590 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266907977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.021 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.024 255071 DEBUG nova.virt.libvirt.vif [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-700650819',id=1,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXdJHxCf+KhIzKU93CDT91LAa/ODzUHhpG+4ryHEHWAJajfjeKrKpfFQLVHoHOxBkcvZ0Yaky80vVJ9BA42t0nKn5643xxhHfoAlCf/6QaaHOImmOmRutgA8MPci8r5PQ==',key_name='tempest-keypair-419189598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5757f1dcffd49e48fe28b1c2c26b71a',ramdisk_id='',reservation_id='r-jqytidtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-306938447',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-306938447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b288cb3716343b3b86a120d6c892ab4',uuid=5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.025 255071 DEBUG nova.network.os_vif_util [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converting VIF {"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.027 255071 DEBUG nova.network.os_vif_util [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.031 255071 DEBUG nova.objects.instance [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.061 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <uuid>5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e</uuid>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <name>instance-00000001</name>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <metadata>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-700650819</nova:name>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 07:57:59</nova:creationTime>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:user uuid="0b288cb3716343b3b86a120d6c892ab4">tempest-EncryptedVolumesExtendAttachedTest-306938447-project-member</nova:user>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:project uuid="d5757f1dcffd49e48fe28b1c2c26b71a">tempest-EncryptedVolumesExtendAttachedTest-306938447</nova:project>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <nova:port uuid="45c74639-2d52-4fcf-9874-4ec3f104851e">
Nov 29 07:58:00 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </metadata>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <system>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="serial">5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="uuid">5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </system>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <os>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </os>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <features>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <apic/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </features>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </clock>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk">
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </source>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </auth>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config">
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </source>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 07:58:00 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       </auth>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:96:5f:e8"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <target dev="tap45c74639-2d"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/console.log" append="off"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </serial>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <video>
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </video>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 07:58:00 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 07:58:00 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 07:58:00 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:58:00 compute-0 nova_compute[255040]: </domain>
Nov 29 07:58:00 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
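The dumped guest definition above is ordinary libvirt domain XML, so it can be inspected offline; a minimal sketch with Python's standard library that pulls the two RBD sources out of a saved copy (the domain.xml file name is hypothetical, the element paths match the dump above):

    import xml.etree.ElementTree as ET

    tree = ET.parse('domain.xml')  # hypothetical file holding the dump above
    for disk in tree.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            # prints the vda disk and the sda .config cdrom RBD image names
            print(disk.get('device'), src.get('name'))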
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.062 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Preparing to wait for external event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.062 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.063 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.063 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.064 255071 DEBUG nova.virt.libvirt.vif [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-700650819',id=1,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXdJHxCf+KhIzKU93CDT91LAa/ODzUHhpG+4ryHEHWAJajfjeKrKpfFQLVHoHOxBkcvZ0Yaky80vVJ9BA42t0nKn5643xxhHfoAlCf/6QaaHOImmOmRutgA8MPci8r5PQ==',key_name='tempest-keypair-419189598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5757f1dcffd49e48fe28b1c2c26b71a',ramdisk_id='',reservation_id='r-jqytidtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-306938447',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-306938447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b288cb3716343b3b86a120d6c892ab4',uuid=5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.065 255071 DEBUG nova.network.os_vif_util [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converting VIF {"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.066 255071 DEBUG nova.network.os_vif_util [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.067 255071 DEBUG os_vif [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.134 255071 DEBUG ovsdbapp.backend.ovs_idl [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.135 255071 DEBUG ovsdbapp.backend.ovs_idl [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.135 255071 DEBUG ovsdbapp.backend.ovs_idl [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.135 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.136 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.136 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.137 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.138 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.140 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.149 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.149 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.149 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.151 255071 INFO oslo.privsep.daemon [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpr553td_h/privsep.sock']
Nov 29 07:58:00 compute-0 ceph-mon[75237]: pgmap v1046: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 895 B/s wr, 13 op/s
Nov 29 07:58:00 compute-0 ceph-mon[75237]: osdmap e146: 3 total, 3 up, 3 in
Nov 29 07:58:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2751743101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2266907977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.955 255071 INFO oslo.privsep.daemon [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.788 261760 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.794 261760 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.797 261760 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 29 07:58:00 compute-0 nova_compute[255040]: 2025-11-29 07:58:00.797 261760 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261760
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.273 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.274 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45c74639-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.275 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap45c74639-2d, col_values=(('external_ids', {'iface-id': '45c74639-2d52-4fcf-9874-4ec3f104851e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:5f:e8', 'vm-uuid': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.278 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:01 compute-0 NetworkManager[49116]: <info>  [1764403081.2804] manager: (tap45c74639-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.283 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.286 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.286 255071 INFO os_vif [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d')
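The AddPortCommand/DbSetCommand pair above is the entire OVS side of plugging this VIF. The same two operations issued through ovsdbapp, roughly the way os-vif does it; the endpoint, bridge, port name, and external_ids are copied from the log, and a stock ovsdbapp setup is assumed:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction containing the same two commands the log shows.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap45c74639-2d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap45c74639-2d',
            ('external_ids', {
                'iface-id': '45c74639-2d52-4fcf-9874-4ec3f104851e',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:96:5f:e8',
                'vm-uuid': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e'})))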
Nov 29 07:58:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 68 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.467 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.468 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.468 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No VIF found with MAC fa:16:3e:96:5f:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.469 255071 INFO nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Using config drive
Nov 29 07:58:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 07:58:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 07:58:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 07:58:01 compute-0 nova_compute[255040]: 2025-11-29 07:58:01.508 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:02 compute-0 ceph-mon[75237]: pgmap v1048: 305 pgs: 305 active+clean; 68 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 07:58:02 compute-0 ceph-mon[75237]: osdmap e147: 3 total, 3 up, 3 in
Nov 29 07:58:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.328 255071 INFO nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Creating config drive at /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.343 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mzn9fwc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.493 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mzn9fwc" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.537 255071 DEBUG nova.storage.rbd_utils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] rbd image 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.543 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.706 255071 DEBUG oslo_concurrency.processutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.707 255071 INFO nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Deleting local config drive /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e/disk.config because it was imported into RBD.
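
The sequence from 07:58:03.328 to 07:58:03.707 is the RBD-backed config-drive path: build an ISO 9660 image with mkisofs, import it into the vms pool as <instance-uuid>_disk.config, then delete the local file since RBD now holds the only copy needed. The same three steps as a standalone sketch (nova actually drives these through oslo.concurrency's processutils; the /tmp staging directory is the transient one from the log):

    import os
    import subprocess

    uuid = '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    # 1. Build the config-drive ISO from the staged metadata tree.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l', '-quiet',
                    '-J', '-r', '-V', 'config-2', '/tmp/tmp_mzn9fwc'],
                   check=True)

    # 2. Import it into Ceph as an image-format-2 RBD image.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)

    # 3. Drop the local copy, mirroring the "Deleting local config drive" line.
    os.remove(iso)
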
Nov 29 07:58:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:58:03 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 07:58:03 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 07:58:03 compute-0 NetworkManager[49116]: <info>  [1764403083.8541] manager: (tap45c74639-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 07:58:03 compute-0 kernel: tap45c74639-2d: entered promiscuous mode
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.858 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:03 compute-0 ovn_controller[153295]: 2025-11-29T07:58:03Z|00027|binding|INFO|Claiming lport 45c74639-2d52-4fcf-9874-4ec3f104851e for this chassis.
Nov 29 07:58:03 compute-0 ovn_controller[153295]: 2025-11-29T07:58:03Z|00028|binding|INFO|45c74639-2d52-4fcf-9874-4ec3f104851e: Claiming fa:16:3e:96:5f:e8 10.100.0.6
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.864 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:03.876 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:5f:e8 10.100.0.6'], port_security=['fa:16:3e:96:5f:e8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5757f1dcffd49e48fe28b1c2c26b71a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5be2926e-1d47-4331-838e-0bd2d002e2f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73c4b79a-ad43-42b8-bec9-14d0c32e9bad, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=45c74639-2d52-4fcf-9874-4ec3f104851e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:03.878 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 45c74639-2d52-4fcf-9874-4ec3f104851e in datapath 94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c bound to our chassis
Nov 29 07:58:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:03.882 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c
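
The Matched UPDATE line comes from ovsdbapp's row-event machinery (event.py:43): the metadata agent registers event classes against the OVN Southbound IDL, and when the Port_Binding row gains a chassis the matching event fires and the agent provisions the ovnmeta- namespace. A stripped-down event class assuming ovsdbapp's RowEvent base; the real neutron class carries more filtering, and the handler body here is illustrative only:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # Fire on any update to a Port_Binding row; finer checks
            # (e.g. "is this bound to our chassis?") happen in run().
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Illustrative handler: provision metadata for the port's
            # datapath, as the agent logs at 07:58:03.882.
            self.agent.provision_datapath(row)
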
Nov 29 07:58:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:03.884 163500 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmprlq35fsj/privsep.sock']
Nov 29 07:58:03 compute-0 systemd-udevd[261858]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:58:03 compute-0 systemd-machined[216271]: New machine qemu-1-instance-00000001.
Nov 29 07:58:03 compute-0 NetworkManager[49116]: <info>  [1764403083.9301] device (tap45c74639-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:58:03 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:58:03 compute-0 NetworkManager[49116]: <info>  [1764403083.9320] device (tap45c74639-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.940 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:03 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 29 07:58:03 compute-0 ovn_controller[153295]: 2025-11-29T07:58:03Z|00029|binding|INFO|Setting lport 45c74639-2d52-4fcf-9874-4ec3f104851e ovn-installed in OVS
Nov 29 07:58:03 compute-0 ovn_controller[153295]: 2025-11-29T07:58:03Z|00030|binding|INFO|Setting lport 45c74639-2d52-4fcf-9874-4ec3f104851e up in Southbound
Nov 29 07:58:03 compute-0 nova_compute[255040]: 2025-11-29 07:58:03.948 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.262 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 07:58:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 07:58:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 07:58:04 compute-0 ceph-mon[75237]: pgmap v1050: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.681 163500 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.683 163500 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprlq35fsj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.538 261880 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.551 261880 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.554 261880 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.554 261880 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261880
Nov 29 07:58:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:04.686 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fe60f2d2-de78-41ee-9d08-35542ac0ea24]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.724 255071 DEBUG nova.compute.manager [req-b4787cfc-61ce-4c0a-a9cc-44dbf5132fc4 req-3956077b-ec49-44eb-8846-f1da99fe17e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.725 255071 DEBUG oslo_concurrency.lockutils [req-b4787cfc-61ce-4c0a-a9cc-44dbf5132fc4 req-3956077b-ec49-44eb-8846-f1da99fe17e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.725 255071 DEBUG oslo_concurrency.lockutils [req-b4787cfc-61ce-4c0a-a9cc-44dbf5132fc4 req-3956077b-ec49-44eb-8846-f1da99fe17e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.726 255071 DEBUG oslo_concurrency.lockutils [req-b4787cfc-61ce-4c0a-a9cc-44dbf5132fc4 req-3956077b-ec49-44eb-8846-f1da99fe17e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.726 255071 DEBUG nova.compute.manager [req-b4787cfc-61ce-4c0a-a9cc-44dbf5132fc4 req-3956077b-ec49-44eb-8846-f1da99fe17e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Processing event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.749 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.751 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403084.7489657, 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.751 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] VM Started (Lifecycle Event)
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.761 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.766 255071 INFO nova.virt.libvirt.driver [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Instance spawned successfully.
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.767 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.838 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.843 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.874 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.874 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403084.750439, 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.875 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] VM Paused (Lifecycle Event)
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.883 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.883 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.884 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.884 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.885 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.885 255071 DEBUG nova.virt.libvirt.driver [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.912 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.916 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403084.7612352, 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.917 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] VM Resumed (Lifecycle Event)
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.937 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.942 255071 INFO nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Took 10.47 seconds to spawn the instance on the hypervisor.
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.943 255071 DEBUG nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.947 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:58:04 compute-0 nova_compute[255040]: 2025-11-29 07:58:04.985 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:58:05 compute-0 nova_compute[255040]: 2025-11-29 07:58:05.028 255071 INFO nova.compute.manager [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Took 11.51 seconds to build instance.
Nov 29 07:58:05 compute-0 nova_compute[255040]: 2025-11-29 07:58:05.050 255071 DEBUG oslo_concurrency.lockutils [None req-dbea31e2-eb10-40b9-9053-4d604b101968 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 165 op/s
Nov 29 07:58:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:05.385 261880 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:05.386 261880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:05.386 261880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 07:58:05 compute-0 ceph-mon[75237]: osdmap e148: 3 total, 3 up, 3 in
Nov 29 07:58:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 07:58:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 07:58:05 compute-0 podman[261925]: 2025-11-29 07:58:05.994788183 +0000 UTC m=+0.158086408 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.233 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dab317e2-a75c-4f07-ac78-60660884b95a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.234 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94996ac3-31 in ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.236 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94996ac3-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.237 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6b94f2d5-bed5-446f-9129-5299644946fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.241 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[de35090d-0d8d-4da2-999a-9449556ec80d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.277 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.293 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[be4c8e0e-5df6-4e0c-ab83-bb7f341fce61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.313 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a870804d-e3a2-4930-ac7f-98e89eb659f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.316 163500 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpkkpnc83a/privsep.sock']
Nov 29 07:58:06 compute-0 ceph-mon[75237]: pgmap v1052: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 165 op/s
Nov 29 07:58:06 compute-0 ceph-mon[75237]: osdmap e149: 3 total, 3 up, 3 in
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.901 255071 DEBUG nova.compute.manager [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.902 255071 DEBUG oslo_concurrency.lockutils [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.902 255071 DEBUG oslo_concurrency.lockutils [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.903 255071 DEBUG oslo_concurrency.lockutils [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.903 255071 DEBUG nova.compute.manager [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] No waiting events found dispatching network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:58:06 compute-0 nova_compute[255040]: 2025-11-29 07:58:06.903 255071 WARNING nova.compute.manager [req-f7de8512-5129-488d-95cf-78a400ea98f7 req-fb6a87dd-2e65-4468-bd1e-b530a4a0036b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received unexpected event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e for instance with vm_state active and task_state None.
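
The warning at 07:58:06.903 is the tail of nova's external-event handshake, not a failure: before plugging the VIF, the spawn path registers a waiter for network-vif-plugged, which the first neutron notification (07:58:04.724) pops and completes. This second notification arrives after the instance is already active, finds no registered waiter, and is logged as unexpected and dropped. A toy model of that pop-or-warn bookkeeping, not nova's actual code:

    import threading

    class InstanceEvents:
        """Toy version of the waiter table nova locks around above."""

        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, instance, name):
            ev = threading.Event()
            self._waiters[(instance, name)] = ev
            return ev  # the spawning thread blocks on ev.wait()

        def pop(self, instance, name):
            ev = self._waiters.pop((instance, name), None)
            if ev is None:
                print(f'WARNING: received unexpected event {name} '
                      f'for instance {instance}')
            else:
                ev.set()  # wakes the thread waiting in spawn
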
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.162 163500 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.164 163500 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkkpnc83a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:06.993 261961 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.002 261961 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.005 261961 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.005 261961 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261961
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.168 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[829cc268-66de-47b6-95fc-904209ab1627]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783559313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783559313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 185 op/s
Nov 29 07:58:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 07:58:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 07:58:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1783559313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1783559313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.732 261961 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.732 261961 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:07.732 261961 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8586] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8595] device (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8606] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8610] device (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8618] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8626] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8631] device (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:58:07 compute-0 NetworkManager[49116]: <info>  [1764403087.8635] device (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:58:07 compute-0 nova_compute[255040]: 2025-11-29 07:58:07.866 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:07 compute-0 nova_compute[255040]: 2025-11-29 07:58:07.936 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:07 compute-0 nova_compute[255040]: 2025-11-29 07:58:07.945 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 07:58:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.294 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6fca633d-cb73-429b-b873-52f724d528a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 NetworkManager[49116]: <info>  [1764403088.3237] manager: (tap94996ac3-30): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.322 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[61e49deb-f547-4ad6-8d05-f1c38f18311e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 systemd-udevd[261974]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.356 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebc8359-07b1-4481-b34e-1e926dfc6b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.360 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd4128f-976b-4362-9c4a-b7148e80b3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 NetworkManager[49116]: <info>  [1764403088.3932] device (tap94996ac3-30): carrier: link connected
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.401 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[1468dd7d-157e-443a-a562-d9b2d6d300a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.409 255071 DEBUG nova.compute.manager [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-changed-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.409 255071 DEBUG nova.compute.manager [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Refreshing instance network info cache due to event network-changed-45c74639-2d52-4fcf-9874-4ec3f104851e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.409 255071 DEBUG oslo_concurrency.lockutils [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.409 255071 DEBUG oslo_concurrency.lockutils [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.410 255071 DEBUG nova.network.neutron [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Refreshing network info cache for port 45c74639-2d52-4fcf-9874-4ec3f104851e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.425 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4c9314-b8ee-48a9-bc1e-4a8daef67f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94996ac3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:c3:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532573, 'reachable_time': 29083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261993, 'error': None, 'target': 'ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.452 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d72d38ec-5ed2-4156-abf1-d46d25057520]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:c390'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532573, 'tstamp': 532573}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261994, 'error': None, 'target': 'ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.479 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e9448d66-7877-4a64-887d-6816147e8bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94996ac3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:c3:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532573, 'reachable_time': 29083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261995, 'error': None, 'target': 'ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
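
The two oversized privsep replies above are raw pyroute2 netlink dumps taken inside the ovnmeta- namespace: an RTM_NEWLINK for the new veth leg tap94996ac3-31 (MAC fa:16:3e:88:c3:90, state UP) and an RTM_NEWADDR for its link-local address. The same facts can be read back far more compactly; a sketch using pyroute2's NetNS handle, with the namespace name taken from the 'target' field in the replies:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c') as ns:
        for link in ns.get_links():
            # get_attr() pulls single attributes out of the verbose
            # attrs lists seen in the log replies above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
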
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.525 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e149df30-2689-4415-b42b-1026e0bb60d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ceph-mon[75237]: pgmap v1054: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 185 op/s
Nov 29 07:58:08 compute-0 ceph-mon[75237]: osdmap e150: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 ceph-mon[75237]: osdmap e151: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.604 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c14244f8-b561-444e-a986-c8da4369943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.607 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94996ac3-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.607 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.608 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94996ac3-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:08 compute-0 kernel: tap94996ac3-30: entered promiscuous mode
Nov 29 07:58:08 compute-0 NetworkManager[49116]: <info>  [1764403088.6126] manager: (tap94996ac3-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.614 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.615 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94996ac3-30, col_values=(('external_ids', {'iface-id': 'ca861b85-d243-4b8e-8435-da25d0786127'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.616 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:08 compute-0 ovn_controller[153295]: 2025-11-29T07:58:08Z|00031|binding|INFO|Releasing lport ca861b85-d243-4b8e-8435-da25d0786127 from this chassis (sb_readonly=0)
Nov 29 07:58:08 compute-0 nova_compute[255040]: 2025-11-29 07:58:08.641 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.643 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.645 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f2d940-0bb4-4489-bbe1-6611ce640ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.647 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: global
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c.pid.haproxy
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:58:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:08.649 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'env', 'PROCESS_TAG=haproxy-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:09 compute-0 podman[262027]: 2025-11-29 07:58:09.071924906 +0000 UTC m=+0.034443536 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:58:09 compute-0 nova_compute[255040]: 2025-11-29 07:58:09.263 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 192 op/s
Nov 29 07:58:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 29 07:58:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 podman[262027]: 2025-11-29 07:58:09.916627285 +0000 UTC m=+0.879145835 container create 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:58:09 compute-0 ceph-mon[75237]: pgmap v1057: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 192 op/s
Nov 29 07:58:09 compute-0 ceph-mon[75237]: osdmap e152: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 systemd[1]: Started libpod-conmon-02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301.scope.
Nov 29 07:58:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af59e6bb946ff976edcdce579b5f1550b27ae1563e9163126d521e44a76be12b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:10 compute-0 podman[262027]: 2025-11-29 07:58:10.164177774 +0000 UTC m=+1.126696334 container init 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:58:10 compute-0 podman[262027]: 2025-11-29 07:58:10.174254365 +0000 UTC m=+1.136772905 container start 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:10 compute-0 nova_compute[255040]: 2025-11-29 07:58:10.189 255071 DEBUG nova.network.neutron [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updated VIF entry in instance network info cache for port 45c74639-2d52-4fcf-9874-4ec3f104851e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:58:10 compute-0 nova_compute[255040]: 2025-11-29 07:58:10.195 255071 DEBUG nova.network.neutron [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating instance_info_cache with network_info: [{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:10 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [NOTICE]   (262046) : New worker (262048) forked
Nov 29 07:58:10 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [NOTICE]   (262046) : Loading success.
Nov 29 07:58:10 compute-0 nova_compute[255040]: 2025-11-29 07:58:10.243 255071 DEBUG oslo_concurrency.lockutils [req-24cb95dd-4706-448e-95ca-4bb8390baf73 req-347410d1-bc14-40cd-a28b-d91e44211d6f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:58:11 compute-0 nova_compute[255040]: 2025-11-29 07:58:11.280 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 218 op/s
Nov 29 07:58:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 29 07:58:12 compute-0 ceph-mon[75237]: pgmap v1059: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 218 op/s
Nov 29 07:58:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 07:58:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 29 07:58:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 133 op/s
Nov 29 07:58:13 compute-0 ceph-mon[75237]: osdmap e153: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mon[75237]: osdmap e154: 3 total, 3 up, 3 in
Nov 29 07:58:14 compute-0 nova_compute[255040]: 2025-11-29 07:58:14.268 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 29 07:58:14 compute-0 ceph-mon[75237]: pgmap v1062: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 133 op/s
Nov 29 07:58:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 07:58:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 29 07:58:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 KiB/s wr, 160 op/s
Nov 29 07:58:15 compute-0 ceph-mon[75237]: osdmap e155: 3 total, 3 up, 3 in
Nov 29 07:58:15 compute-0 podman[262058]: 2025-11-29 07:58:15.943306524 +0000 UTC m=+0.105589757 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:58:16 compute-0 nova_compute[255040]: 2025-11-29 07:58:16.283 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 29 07:58:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 07:58:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 29 07:58:16 compute-0 ceph-mon[75237]: pgmap v1064: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 KiB/s wr, 160 op/s
Nov 29 07:58:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 54 op/s
Nov 29 07:58:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 29 07:58:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 07:58:17 compute-0 ceph-mon[75237]: osdmap e156: 3 total, 3 up, 3 in
Nov 29 07:58:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 29 07:58:17 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:58:17 compute-0 nova_compute[255040]: 2025-11-29 07:58:17.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 29 07:58:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 ceph-mon[75237]: pgmap v1066: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 54 op/s
Nov 29 07:58:18 compute-0 ceph-mon[75237]: osdmap e157: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 ceph-mon[75237]: osdmap e158: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 nova_compute[255040]: 2025-11-29 07:58:18.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:19 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:58:19 compute-0 nova_compute[255040]: 2025-11-29 07:58:19.269 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Nov 29 07:58:19 compute-0 ovn_controller[153295]: 2025-11-29T07:58:19Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:5f:e8 10.100.0.6
Nov 29 07:58:19 compute-0 ovn_controller[153295]: 2025-11-29T07:58:19Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:5f:e8 10.100.0.6
Nov 29 07:58:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349052246' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349052246' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:19 compute-0 nova_compute[255040]: 2025-11-29 07:58:19.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:19 compute-0 nova_compute[255040]: 2025-11-29 07:58:19.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:19 compute-0 nova_compute[255040]: 2025-11-29 07:58:19.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/162385342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/162385342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75237]: pgmap v1069: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Nov 29 07:58:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1349052246' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1349052246' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/162385342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/162385342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 nova_compute[255040]: 2025-11-29 07:58:20.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:20 compute-0 nova_compute[255040]: 2025-11-29 07:58:20.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:58:20 compute-0 nova_compute[255040]: 2025-11-29 07:58:20.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.286 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 111 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 3.0 MiB/s wr, 182 op/s
Nov 29 07:58:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:21.368 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.369 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:21.370 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.430 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.431 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.431 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:58:21 compute-0 nova_compute[255040]: 2025-11-29 07:58:21.431 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:21 compute-0 ceph-mon[75237]: pgmap v1070: 305 pgs: 305 active+clean; 111 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 3.0 MiB/s wr, 182 op/s
Nov 29 07:58:22 compute-0 podman[262078]: 2025-11-29 07:58:22.939027194 +0000 UTC m=+0.094983623 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 542 KiB/s rd, 3.8 MiB/s wr, 205 op/s
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.725 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating instance_info_cache with network_info: [{"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.756 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.756 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.757 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.757 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.758 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.758 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.782 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.783 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.784 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.784 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:58:23 compute-0 nova_compute[255040]: 2025-11-29 07:58:23.785 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995926207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.210 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.271 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:24.373 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:24 compute-0 ceph-mon[75237]: pgmap v1071: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 542 KiB/s rd, 3.8 MiB/s wr, 205 op/s
Nov 29 07:58:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1995926207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.479 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.480 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.716 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.718 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4675MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.718 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.718 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.868 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.869 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.870 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:58:24 compute-0 nova_compute[255040]: 2025-11-29 07:58:24.905 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 448 KiB/s rd, 3.2 MiB/s wr, 165 op/s
Nov 29 07:58:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042273300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.378 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.386 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:58:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3042273300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.465 255071 ERROR nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [req-d866a069-bafc-4d47-8963-df5d0f486147] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 858d78b2-ffcd-4247-ba96-0ec767fec62e.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-d866a069-bafc-4d47-8963-df5d0f486147"}]}
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.482 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.502 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.502 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.518 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.539 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:58:25 compute-0 nova_compute[255040]: 2025-11-29 07:58:25.573 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554942155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.098 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.106 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.291 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.334 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updated inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.335 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.335 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:58:26 compute-0 ceph-mon[75237]: pgmap v1072: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 448 KiB/s rd, 3.2 MiB/s wr, 165 op/s
Nov 29 07:58:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/554942155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.577 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:58:26 compute-0 nova_compute[255040]: 2025-11-29 07:58:26.578 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:27.117 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:27.118 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:27.119 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.7 MiB/s wr, 137 op/s
Nov 29 07:58:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 29 07:58:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 07:58:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 29 07:58:28 compute-0 ceph-mon[75237]: pgmap v1073: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.7 MiB/s wr, 137 op/s
Nov 29 07:58:28 compute-0 ceph-mon[75237]: osdmap e159: 3 total, 3 up, 3 in
Nov 29 07:58:29 compute-0 nova_compute[255040]: 2025-11-29 07:58:29.275 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Nov 29 07:58:30 compute-0 ceph-mon[75237]: pgmap v1075: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Nov 29 07:58:31 compute-0 nova_compute[255040]: 2025-11-29 07:58:31.293 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 823 KiB/s wr, 30 op/s
Nov 29 07:58:32 compute-0 ceph-mon[75237]: pgmap v1076: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 823 KiB/s wr, 30 op/s
Nov 29 07:58:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Nov 29 07:58:34 compute-0 nova_compute[255040]: 2025-11-29 07:58:34.277 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:34 compute-0 ceph-mon[75237]: pgmap v1077: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Nov 29 07:58:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Nov 29 07:58:36 compute-0 nova_compute[255040]: 2025-11-29 07:58:36.295 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:36 compute-0 ceph-mon[75237]: pgmap v1078: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 0 op/s
Nov 29 07:58:36 compute-0 podman[262165]: 2025-11-29 07:58:36.957140395 +0000 UTC m=+0.108542177 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
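
The health_status event above is podman's periodic healthcheck of the ovn_controller container (health_status=healthy, failing streak 0). A minimal sketch of reading the same state out of band, assuming podman is on PATH; the container name is taken from the log line:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "ovn_controller"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    # Recent podman exposes "Health"; older releases used "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"))  # e.g. "healthy"
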
Nov 29 07:58:37 compute-0 sudo[262191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:37 compute-0 sudo[262191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:37 compute-0 sudo[262191]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:37 compute-0 sudo[262216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:37 compute-0 sudo[262216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:37 compute-0 sudo[262216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s wr, 0 op/s
Nov 29 07:58:37 compute-0 sudo[262241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:37 compute-0 sudo[262241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:37 compute-0 sudo[262241]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:37 compute-0 sudo[262266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:58:37 compute-0 sudo[262266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:37 compute-0 sudo[262266]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:38 compute-0 sudo[262322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 sudo[262322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:38 compute-0 sudo[262347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 sudo[262347]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:38 compute-0 sudo[262372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:38 compute-0 sudo[262372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 sudo[262372]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:58:38 compute-0 sudo[262397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 nova_compute[255040]: 2025-11-29 07:58:38.322 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:38 compute-0 ceph-mon[75237]: pgmap v1079: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s wr, 0 op/s
Nov 29 07:58:38 compute-0 sudo[262397]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1da97de1-22a6-4d3c-a789-2cf8669a04cd does not exist
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 2c88ade1-6ec8-41cd-a8ba-808e5f2c2b99 does not exist
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6a00ddd2-38e5-4e2f-8a4c-20333c006576 does not exist
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:58:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:58:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
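
Each handle_command/dispatch pair above is the monitor servicing a JSON mon command sent by the mgr. A minimal sketch issuing the same "config generate-minimal-conf" prefix through the librados Python binding; the conffile path is an assumption for illustration:

    import json
    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()
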
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:38 compute-0 sudo[262440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:38 compute-0 sudo[262440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:38 compute-0 sudo[262440]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:38 compute-0 sudo[262465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 sudo[262465]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:38 compute-0 sudo[262490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:58:38
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Nov 29 07:58:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:58:38 compute-0 sudo[262490]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:38 compute-0 sudo[262515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:58:38 compute-0 sudo[262515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.270174054 +0000 UTC m=+0.048696459 container create 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:58:39 compute-0 nova_compute[255040]: 2025-11-29 07:58:39.280 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:39 compute-0 systemd[1]: Started libpod-conmon-8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886.scope.
Nov 29 07:58:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.247191466 +0000 UTC m=+0.025713891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.379012827 +0000 UTC m=+0.157535252 container init 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.391816081 +0000 UTC m=+0.170338486 container start 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.396128477 +0000 UTC m=+0.174650882 container attach 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:58:39 compute-0 ecstatic_greider[262596]: 167 167
Nov 29 07:58:39 compute-0 systemd[1]: libpod-8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886.scope: Deactivated successfully.
Nov 29 07:58:39 compute-0 conmon[262596]: conmon 8fe19e612b2dc36f049e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886.scope/container/memory.events
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.403656899 +0000 UTC m=+0.182179304 container died 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeeab2a23c9e845ae056e52deddd4b0dc1eb543889ec3a151cdd00b0cfe05740-merged.mount: Deactivated successfully.
Nov 29 07:58:39 compute-0 podman[262579]: 2025-11-29 07:58:39.460715902 +0000 UTC m=+0.239238307 container remove 8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:58:39 compute-0 systemd[1]: libpod-conmon-8fe19e612b2dc36f049ea07c312e24a5080994a6b7b3332fc8914195e5751886.scope: Deactivated successfully.
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:39 compute-0 podman[262619]: 2025-11-29 07:58:39.671611216 +0000 UTC m=+0.049131551 container create 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:39 compute-0 systemd[1]: Started libpod-conmon-9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5.scope.
Nov 29 07:58:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:39 compute-0 podman[262619]: 2025-11-29 07:58:39.651144307 +0000 UTC m=+0.028664662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:39 compute-0 podman[262619]: 2025-11-29 07:58:39.765857988 +0000 UTC m=+0.143378343 container init 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:58:39 compute-0 podman[262619]: 2025-11-29 07:58:39.773365899 +0000 UTC m=+0.150886234 container start 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:58:39 compute-0 podman[262619]: 2025-11-29 07:58:39.777685235 +0000 UTC m=+0.155205570 container attach 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:58:40 compute-0 nova_compute[255040]: 2025-11-29 07:58:40.610 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:40 compute-0 nova_compute[255040]: 2025-11-29 07:58:40.612 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:40 compute-0 ceph-mon[75237]: pgmap v1080: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 07:58:40 compute-0 nova_compute[255040]: 2025-11-29 07:58:40.630 255071 DEBUG nova.objects.instance [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'flavor' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:40 compute-0 nova_compute[255040]: 2025-11-29 07:58:40.669 255071 INFO nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Ignoring supplied device name: /dev/vdb
Nov 29 07:58:40 compute-0 nova_compute[255040]: 2025-11-29 07:58:40.686 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:40 compute-0 stupefied_robinson[262636]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:58:40 compute-0 stupefied_robinson[262636]: --> relative data size: 1.0
Nov 29 07:58:40 compute-0 stupefied_robinson[262636]: --> All data devices are unavailable
Nov 29 07:58:40 compute-0 systemd[1]: libpod-9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5.scope: Deactivated successfully.
Nov 29 07:58:40 compute-0 systemd[1]: libpod-9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5.scope: Consumed 1.165s CPU time.
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.012 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.014 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.014 255071 INFO nova.compute.manager [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Attaching volume 6e4e0855-a938-4ee2-8827-c1624e3b4891 to /dev/vdb
Nov 29 07:58:41 compute-0 podman[262667]: 2025-11-29 07:58:41.053492564 +0000 UTC m=+0.039206534 container died 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-38750818217b066b1b33c8107adaa4f4597703ce4584dda0c42ada9adf701970-merged.mount: Deactivated successfully.
Nov 29 07:58:41 compute-0 podman[262667]: 2025-11-29 07:58:41.120371211 +0000 UTC m=+0.106085161 container remove 9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_robinson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:58:41 compute-0 systemd[1]: libpod-conmon-9d06f723bc544ec6523dcd37de6e58ee186c8f77508c87882ca98507bec0c1d5.scope: Deactivated successfully.
Nov 29 07:58:41 compute-0 sudo[262515]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.215 255071 DEBUG os_brick.utils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.218 255071 INFO oslo.privsep.daemon [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmphsoflsei/privsep.sock']
Nov 29 07:58:41 compute-0 sudo[262681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:41 compute-0 sudo[262681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:41 compute-0 sudo[262681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.300 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:41 compute-0 sudo[262707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:41 compute-0 sudo[262707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:41 compute-0 sudo[262707]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s wr, 0 op/s
Nov 29 07:58:41 compute-0 sudo[262734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:41 compute-0 sudo[262734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:41 compute-0 sudo[262734]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:41 compute-0 sudo[262759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:58:41 compute-0 sudo[262759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.833455994 +0000 UTC m=+0.049750387 container create 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:58:41 compute-0 systemd[1]: Started libpod-conmon-84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0.scope.
Nov 29 07:58:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.810056046 +0000 UTC m=+0.026350469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.924280434 +0000 UTC m=+0.140574857 container init 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.933082411 +0000 UTC m=+0.149376804 container start 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.937286074 +0000 UTC m=+0.153580467 container attach 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:58:41 compute-0 keen_tu[262844]: 167 167
Nov 29 07:58:41 compute-0 systemd[1]: libpod-84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0.scope: Deactivated successfully.
Nov 29 07:58:41 compute-0 conmon[262844]: conmon 84087f0aaf8b601e7b89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0.scope/container/memory.events
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.941709813 +0000 UTC m=+0.158004216 container died 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba561aafbd17dca8b84729491647abe1abea828c9f463fb4f64f889dc2a88834-merged.mount: Deactivated successfully.
Nov 29 07:58:41 compute-0 podman[262825]: 2025-11-29 07:58:41.989111086 +0000 UTC m=+0.205405489 container remove 84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.991 255071 INFO oslo.privsep.daemon [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.860 262843 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.865 262843 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.868 262843 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.868 262843 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262843
Nov 29 07:58:41 compute-0 nova_compute[255040]: 2025-11-29 07:58:41.998 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a33974-6cca-4210-adc3-ecfc2e4846bb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
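
The privsep lines above show nova spawning its root helper via rootwrap, running with CAP_SYS_ADMIN. A minimal sketch of how such a context is declared with oslo.privsep; the package name "example" is illustrative and the module must live under that package for the entrypoint check to pass (this is not os-brick's actual source):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Hypothetically declared at import time in example/privileged.py.
    sys_admin = priv_context.PrivContext(
        "example",                    # entrypoints must live under this package
        cfg_section="example_privsep",
        pkg_root="example",
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @sys_admin.entrypoint
    def read_initiator_name():
        # Executes with uid/gid 0 inside the spawned privsep daemon, like
        # the "cat /etc/iscsi/initiatorname.iscsi" call logged below.
        with open("/etc/iscsi/initiatorname.iscsi") as f:
            return f.read()
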
Nov 29 07:58:42 compute-0 systemd[1]: libpod-conmon-84087f0aaf8b601e7b89a5e9923a748b4f625b1c604436c4192358b011dabca0.scope: Deactivated successfully.
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.096 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.118 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.118 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[32565ee0-7b4b-4366-b5e6-d70748d24a06]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.121 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.131 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.132 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[281b2fb1-d0f7-441a-9df4-9d294574025c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.136 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.150 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.151 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[27bd4a24-b449-4523-a82d-3ce045b352f6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.153 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[af25de7c-37ec-4399-93d4-b964d7e884e6]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.154 255071 DEBUG oslo_concurrency.processutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.178 255071 DEBUG oslo_concurrency.processutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.183 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.185 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.185 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.186 255071 DEBUG os_brick.utils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] <== get_connector_properties: return (969ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
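
The ==>/<== pair above is os-brick's call trace for get_connector_properties; the sketch below mirrors the logged arguments exactly. Illustrative only: it needs os-brick installed and enough privileges for the multipath/iSCSI/NVMe probes to succeed:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    print(props.get("initiator"), props.get("nqn"))
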
Nov 29 07:58:42 compute-0 nova_compute[255040]: 2025-11-29 07:58:42.186 255071 DEBUG nova.virt.block_device [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating existing volume attachment record: 98c81e8c-2b17-4313-addf-0689a71a58d5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:58:42 compute-0 podman[262871]: 2025-11-29 07:58:42.197305637 +0000 UTC m=+0.063414424 container create 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:58:42 compute-0 systemd[1]: Started libpod-conmon-4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b.scope.
Nov 29 07:58:42 compute-0 podman[262871]: 2025-11-29 07:58:42.164223139 +0000 UTC m=+0.030331946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86efe188d9b0b361d72f8825154a5518191e04795709be027853af5b6d5bc87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86efe188d9b0b361d72f8825154a5518191e04795709be027853af5b6d5bc87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86efe188d9b0b361d72f8825154a5518191e04795709be027853af5b6d5bc87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86efe188d9b0b361d72f8825154a5518191e04795709be027853af5b6d5bc87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:42 compute-0 podman[262871]: 2025-11-29 07:58:42.282786513 +0000 UTC m=+0.148895320 container init 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:58:42 compute-0 podman[262871]: 2025-11-29 07:58:42.29157491 +0000 UTC m=+0.157683697 container start 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:42 compute-0 podman[262871]: 2025-11-29 07:58:42.29531115 +0000 UTC m=+0.161419937 container attach 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:58:42 compute-0 ceph-mon[75237]: pgmap v1081: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s wr, 0 op/s
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:58:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]: {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     "0": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "devices": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "/dev/loop3"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             ],
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_name": "ceph_lv0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_size": "21470642176",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "name": "ceph_lv0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "tags": {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.crush_device_class": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.encrypted": "0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_id": "0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.vdo": "0"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             },
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "vg_name": "ceph_vg0"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         }
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     ],
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     "1": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "devices": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "/dev/loop4"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             ],
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_name": "ceph_lv1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_size": "21470642176",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "name": "ceph_lv1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "tags": {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.crush_device_class": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.encrypted": "0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_id": "1",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.vdo": "0"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             },
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "vg_name": "ceph_vg1"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         }
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     ],
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     "2": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "devices": [
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "/dev/loop5"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             ],
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_name": "ceph_lv2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_size": "21470642176",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "name": "ceph_lv2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "tags": {
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.crush_device_class": "",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.encrypted": "0",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osd_id": "2",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:                 "ceph.vdo": "0"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             },
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "type": "block",
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:             "vg_name": "ceph_vg2"
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:         }
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]:     ]
Nov 29 07:58:43 compute-0 distracted_kapitsa[262893]: }
Nov 29 07:58:43 compute-0 systemd[1]: libpod-4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b.scope: Deactivated successfully.
Nov 29 07:58:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:58:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2366378332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:43 compute-0 podman[262902]: 2025-11-29 07:58:43.143891493 +0000 UTC m=+0.029496323 container died 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-86efe188d9b0b361d72f8825154a5518191e04795709be027853af5b6d5bc87c-merged.mount: Deactivated successfully.
Nov 29 07:58:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:43 compute-0 podman[262902]: 2025-11-29 07:58:43.213213636 +0000 UTC m=+0.098818456 container remove 4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:58:43 compute-0 systemd[1]: libpod-conmon-4da23ad096856fed62cb0603bc62b0b29e4533288a35a5d916e73dbe9d07c64b.scope: Deactivated successfully.
Nov 29 07:58:43 compute-0 sudo[262759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 29 07:58:43 compute-0 sudo[262918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:43 compute-0 sudo[262918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:43 compute-0 sudo[262918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:43 compute-0 sudo[262943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:43 compute-0 sudo[262943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:43 compute-0 sudo[262943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:43 compute-0 sudo[262968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:43 compute-0 sudo[262968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:43 compute-0 sudo[262968]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:43 compute-0 sudo[262993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:58:43 compute-0 sudo[262993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2366378332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:43 compute-0 podman[263058]: 2025-11-29 07:58:43.914593645 +0000 UTC m=+0.056772586 container create 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:58:43 compute-0 systemd[1]: Started libpod-conmon-3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371.scope.
Nov 29 07:58:43 compute-0 podman[263058]: 2025-11-29 07:58:43.88757256 +0000 UTC m=+0.029751601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:44 compute-0 podman[263058]: 2025-11-29 07:58:44.00639581 +0000 UTC m=+0.148574801 container init 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:44 compute-0 podman[263058]: 2025-11-29 07:58:44.016454841 +0000 UTC m=+0.158633792 container start 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:58:44 compute-0 podman[263058]: 2025-11-29 07:58:44.021393263 +0000 UTC m=+0.163572234 container attach 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:58:44 compute-0 admiring_ramanujan[263075]: 167 167
Nov 29 07:58:44 compute-0 systemd[1]: libpod-3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371.scope: Deactivated successfully.
Nov 29 07:58:44 compute-0 conmon[263075]: conmon 3569c7d16c2ffa34e142 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371.scope/container/memory.events
Nov 29 07:58:44 compute-0 podman[263058]: 2025-11-29 07:58:44.026680766 +0000 UTC m=+0.168859707 container died 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef64a1d7152a6e108498c9678edfc7154c6263ecaad33d3c2e36f7a801278095-merged.mount: Deactivated successfully.
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.064 255071 DEBUG os_brick.encryptors [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Using volume encryption metadata '{'encryption_key_id': '5fd34816-1da5-44bd-9e58-e7d93b73a9a3', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e', 'attached_at': '', 'detached_at': '', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.068 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.069 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.070 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:44 compute-0 podman[263058]: 2025-11-29 07:58:44.07077673 +0000 UTC m=+0.212955681 container remove 3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:58:44 compute-0 systemd[1]: libpod-conmon-3569c7d16c2ffa34e1428d6b1dc00afd55278a8486782dd3aeb41f8042657371.scope: Deactivated successfully.
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.085 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.108 255071 DEBUG barbicanclient.v1.secrets [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.109 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.131 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.132 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.159 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.160 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.184 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.185 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.216 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.217 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.238 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.239 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 podman[263097]: 2025-11-29 07:58:44.25990432 +0000 UTC m=+0.044643620 container create 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.273 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.273 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.290 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.295 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.296 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 systemd[1]: Started libpod-conmon-3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf.scope.
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.319 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.320 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 podman[263097]: 2025-11-29 07:58:44.238191047 +0000 UTC m=+0.022930367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.342 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.343 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc119cf8826d2ef2ad693b3c90ddf974edd3b619ad02cdfc98013f6b8d9d4b62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc119cf8826d2ef2ad693b3c90ddf974edd3b619ad02cdfc98013f6b8d9d4b62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc119cf8826d2ef2ad693b3c90ddf974edd3b619ad02cdfc98013f6b8d9d4b62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc119cf8826d2ef2ad693b3c90ddf974edd3b619ad02cdfc98013f6b8d9d4b62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.369 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.369 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 podman[263097]: 2025-11-29 07:58:44.37458609 +0000 UTC m=+0.159325400 container init 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:58:44 compute-0 podman[263097]: 2025-11-29 07:58:44.384319622 +0000 UTC m=+0.169058922 container start 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:58:44 compute-0 podman[263097]: 2025-11-29 07:58:44.388274848 +0000 UTC m=+0.173014178 container attach 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.408 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.408 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.438 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.438 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.481 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.482 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.520 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.521 255071 INFO barbicanclient.base [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Calculated Secrets uuid ref: secrets/5fd34816-1da5-44bd-9e58-e7d93b73a9a3
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.539 255071 DEBUG barbicanclient.client [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.541 255071 DEBUG nova.virt.libvirt.host [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 07:58:44 compute-0 nova_compute[255040]:     <volume>6e4e0855-a938-4ee2-8827-c1624e3b4891</volume>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   </usage>
Nov 29 07:58:44 compute-0 nova_compute[255040]: </secret>
Nov 29 07:58:44 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.555 255071 DEBUG nova.objects.instance [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'flavor' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.597 255071 DEBUG nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Attempting to attach volume 6e4e0855-a938-4ee2-8827-c1624e3b4891 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:58:44 compute-0 nova_compute[255040]: 2025-11-29 07:58:44.602 255071 DEBUG nova.virt.libvirt.guest [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891">
Nov 29 07:58:44 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   </source>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 07:58:44 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   </auth>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <serial>6e4e0855-a938-4ee2-8827-c1624e3b4891</serial>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 07:58:44 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="391fa11b-9603-4fa5-addb-5468759843c3"/>
Nov 29 07:58:44 compute-0 nova_compute[255040]:   </encryption>
Nov 29 07:58:44 compute-0 nova_compute[255040]: </disk>
Nov 29 07:58:44 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:58:44 compute-0 ceph-mon[75237]: pgmap v1082: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 29 07:58:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]: {
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_id": 2,
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "type": "bluestore"
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     },
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_id": 0,
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "type": "bluestore"
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     },
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_id": 1,
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:         "type": "bluestore"
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]:     }
Nov 29 07:58:45 compute-0 flamboyant_knuth[263113]: }
Nov 29 07:58:45 compute-0 systemd[1]: libpod-3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf.scope: Deactivated successfully.
Nov 29 07:58:45 compute-0 systemd[1]: libpod-3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf.scope: Consumed 1.067s CPU time.
Nov 29 07:58:45 compute-0 podman[263097]: 2025-11-29 07:58:45.575684192 +0000 UTC m=+1.360423492 container died 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:58:45 compute-0 sshd-session[262653]: Invalid user magento from 45.78.219.195 port 36652
Nov 29 07:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc119cf8826d2ef2ad693b3c90ddf974edd3b619ad02cdfc98013f6b8d9d4b62-merged.mount: Deactivated successfully.
Nov 29 07:58:45 compute-0 podman[263097]: 2025-11-29 07:58:45.639947549 +0000 UTC m=+1.424686849 container remove 3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:58:45 compute-0 systemd[1]: libpod-conmon-3e9453e55e3725fbd9cd2421789840daed8f5bfa3c8896557611952ed27022cf.scope: Deactivated successfully.
Nov 29 07:58:45 compute-0 ceph-mon[75237]: pgmap v1083: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:45 compute-0 sudo[262993]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:58:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:58:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5d3813d4-343c-4664-8e9c-9ead24c7cc0a does not exist
Nov 29 07:58:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 56815feb-023a-4e1e-8484-d7fa14c44e79 does not exist
Nov 29 07:58:45 compute-0 sudo[263179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:45 compute-0 sudo[263179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:45 compute-0 sudo[263179]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:45 compute-0 sudo[263204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:58:45 compute-0 sudo[263204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:45 compute-0 sudo[263204]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:46 compute-0 sshd-session[262653]: Received disconnect from 45.78.219.195 port 36652:11: Bye Bye [preauth]
Nov 29 07:58:46 compute-0 sshd-session[262653]: Disconnected from invalid user magento 45.78.219.195 port 36652 [preauth]
Nov 29 07:58:46 compute-0 nova_compute[255040]: 2025-11-29 07:58:46.305 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:58:46 compute-0 podman[263229]: 2025-11-29 07:58:46.904393692 +0000 UTC m=+0.070016212 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.418 255071 DEBUG nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.419 255071 DEBUG nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.419 255071 DEBUG nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.420 255071 DEBUG nova.virt.libvirt.driver [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] No VIF found with MAC fa:16:3e:96:5f:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.433 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:47 compute-0 ceph-mon[75237]: pgmap v1084: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:47 compute-0 nova_compute[255040]: 2025-11-29 07:58:47.962 255071 DEBUG oslo_concurrency.lockutils [None req-0fc36c8c-6224-4452-b0b9-fb0f66924b1d 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 6.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.293 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.408 255071 DEBUG nova.compute.manager [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event volume-extended-6e4e0855-a938-4ee2-8827-c1624e3b4891 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.424 255071 DEBUG nova.compute.manager [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Handling volume-extended event for volume 6e4e0855-a938-4ee2-8827-c1624e3b4891 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.445 255071 INFO nova.compute.manager [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Cinder extended volume 6e4e0855-a938-4ee2-8827-c1624e3b4891; extending it to detect new size
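The volume-extended event above is Cinder-initiated: since Block Storage API microversion 3.42, Cinder can extend an in-use volume and then notifies Nova via an external instance event, which is what req-5143d8de is handling here. A minimal sketch of the client-side call that triggers this flow, using python-cinderclient (the Keystone endpoint and credentials below are placeholders, not values from this log):

    # Hedged sketch: extend an attached volume so Cinder emits the
    # 'volume-extended' external event that Nova handles above.
    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from cinderclient import client as cinder_client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',  # placeholder
                       username='demo', password='secret',          # placeholder
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = ks_session.Session(auth=auth)

    # Microversion 3.42 or newer is required to extend a volume while attached.
    cinder = cinder_client.Client('3.42', session=sess)
    cinder.volumes.extend('6e4e0855-a938-4ee2-8827-c1624e3b4891', 2)  # new size in GiB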
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.669 255071 DEBUG os_brick.encryptors [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Using volume encryption metadata '{'encryption_key_id': '5fd34816-1da5-44bd-9e58-e7d93b73a9a3', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e', 'attached_at': '', 'detached_at': '', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:58:49 compute-0 nova_compute[255040]: 2025-11-29 07:58:49.671 255071 INFO oslo.privsep.daemon [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpr0obnt2e/privsep.sock']
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.090 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:50 compute-0 ceph-mon[75237]: pgmap v1085: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.393 255071 INFO oslo.privsep.daemon [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.263 263252 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.267 263252 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.269 263252 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 07:58:50 compute-0 nova_compute[255040]: 2025-11-29 07:58:50.269 263252 INFO oslo.privsep.daemon [-] privsep daemon running as pid 263252
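The four privsep lines above show oslo.privsep's standard bootstrap: nova-compute execs sudo nova-rootwrap, which re-execs privsep-helper as root; the helper then serves privileged calls over the unix socket passed via --privsep_sock_path, restricted to the capability set it just logged. A sketch of the context/entrypoint pattern behind nova.privsep.sys_admin_pctxt named in the helper command line (the decorated function is illustrative, not Nova code):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Context mirroring the capability set logged above.
    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pctxt_args={},
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def read_first_sector(path):
        # Runs inside the root daemon (pid 263252 above); the caller in
        # nova-compute only marshals arguments over the privsep socket.
        with open(path, 'rb') as f:
            return f.read(512)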
Nov 29 07:58:50 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 29 07:58:50 compute-0 systemd[1]: Started Process Core Dump (PID 263273/UID 0).
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.309 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 6 op/s
Nov 29 07:58:51 compute-0 ceph-mon[75237]: pgmap v1086: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 6 op/s
Nov 29 07:58:51 compute-0 systemd-coredump[263274]: Process 263254 (qemu-img) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 263264:
                                                    #0  0x00007f1a518bc03c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
                                                    #1  0x00007f1a5186eb86 raise (libc.so.6 + 0x3fb86)
                                                    #2  0x00007f1a51858873 abort (libc.so.6 + 0x29873)
                                                    #3  0x000056457621c5df ___interceptor_pthread_create (qemu-img + 0x4f5df)
                                                    #4  0x00007f1a4ea92ff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)
                                                    #5  0x00007f1a4ea956ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)
                                                    #6  0x00007f1a4f99c26b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)
                                                    #7  0x00007f1a4f5c97a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)
                                                    #8  0x00007f1a4f6a32d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)
                                                    #9  0x00007f1a4f6a3f46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)
                                                    #10 0x00007f1a4f6a42a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)
                                                    #11 0x00007f1a4f3a20ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)
                                                    #12 0x00007f1a4f3a1585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)
                                                    #13 0x00007f1a4f41c498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)
                                                    #14 0x00007f1a4f3bb4e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #15 0x00007f1a4e129ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #16 0x00007f1a518ba2fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #17 0x00007f1a5193f400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 263265:
                                                    #0  0x00007f1a518b738a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f1a518b98e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f1a4f41c266 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127266)
                                                    #3  0x00007f1a4f3bb4e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #4  0x00007f1a4e129ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f1a518ba2fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f1a5193f400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 263257:
                                                    #0  0x00007f1a5193ea3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f1a4ec7a618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f1a4ec78702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f1a4ec792c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f1a4e129ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f1a518ba2fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f1a5193f400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 263272:
                                                    #0  0x00007f1a5193f3ed __clone3 (libc.so.6 + 0x1103ed)
                                                    ELF object binary architecture: AMD x86-64
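The frames above are raw Itanium-ABI mangled symbols; demangled, the crashing thread (263264) is one of librados's io_context_pool workers driving librbd's OpenRequest state machine, which calls Ceph's Thread::try_create -> pthread_create and lands in the SafeStack/sanitizer interceptor compiled into this qemu-img (frame #3) before abort(). A small sketch for reading such traces, assuming the third-party cxxfilt package (a Python front-end to the C++ demangler) is installed:

    # Demangle the _Z... frames from the core dump above.
    import cxxfilt  # assumption: 'pip install cxxfilt'

    frames = [
        '_ZN6Thread10try_createEm',
        '_ZN6librbd8ImageCtx4initEv',
        '_ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv',
    ]
    for sym in frames:
        print(cxxfilt.demangle(sym))
    # Thread::try_create(unsigned long)
    # librbd::ImageCtx::init()
    # librbd::image::OpenRequest<librbd::ImageCtx>::send_refresh()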
Nov 29 07:58:51 compute-0 systemd[1]: systemd-coredump@0-263273-0.service: Deactivated successfully.
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:51 compute-0 nova_compute[255040]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack --force-share --output=json
Nov 29 07:58:51 compute-0 nova_compute[255040]: Exit code: -6
Nov 29 07:58:51 compute-0 nova_compute[255040]: Stdout: ''
Nov 29 07:58:51 compute-0 nova_compute[255040]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Traceback (most recent call last):
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]     info = images.privileged_qemu_img_info(path)
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]     return self.channel.remote_call(name, args, kwargs,
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e]     raise exc_type(*result[2])
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack --force-share --output=json
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Exit code: -6
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Stdout: ''
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.839 255071 ERROR nova.virt.libvirt.driver [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] 
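Read together with the core dump, the failure mode is internally consistent: the logged command caps qemu-img's address space at 1 GiB (--as=1073741824) via oslo_concurrency.prlimit; this qemu-img links librbd/librados, which spawn a thread pool on image open, and its SafeStack runtime mmaps an extra unsafe-stack region per thread. When one of those mmaps fails under the cap, the CHECK at safestack.cpp:120 (MAP_FAILED != addr) aborts the process, so qemu-img dies on SIGABRT and the wrapper reports exit code -6. A sketch of the same rlimit mechanism, assuming a local qemu-img and a plain file image (the real prlimit module applies the limit in a separate wrapper process; preexec_fn has the equivalent effect here):

    import resource
    import subprocess

    ONE_GIB = 1 << 30  # matches --as=1073741824 in the logged command

    def run_with_as_cap(cmd, cap=ONE_GIB):
        def _cap():
            # Applied in the child between fork and exec, so every later
            # mmap (including SafeStack's per-thread stacks) competes for
            # the same capped address space.
            resource.setrlimit(resource.RLIMIT_AS, (cap, cap))
        return subprocess.run(cmd, preexec_fn=_cap,
                              capture_output=True, text=True)

    # Hypothetical local run; the rbd: URL from the log would also need
    # reachable Ceph credentials, so a plain file stands in here.
    res = run_with_as_cap(['qemu-img', 'info', '--output=json', 'disk.img'])
    print(res.returncode, res.stderr)  # negative returncode means killed
                                       # by signal, e.g. -6 == SIGABRT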
Nov 29 07:58:51 compute-0 nova_compute[255040]: 2025-11-29 07:58:51.843 255071 WARNING nova.compute.manager [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Extend volume failed, volume_id=6e4e0855-a938-4ee2-8827-c1624e3b4891, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:51 compute-0 nova_compute[255040]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack --force-share --output=json
Nov 29 07:58:51 compute-0 nova_compute[255040]: Exit code: -6
Nov 29 07:58:51 compute-0 nova_compute[255040]: Stdout: ''
Nov 29 07:58:51 compute-0 nova_compute[255040]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server [req-5143d8de-4c60-4631-ad6e-7dc55c34ed04 req-8014acd3-c195-4a68-ab2a-f4cd7cbb5195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:52 compute-0 nova_compute[255040]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack --force-share --output=json
Nov 29 07:58:52 compute-0 nova_compute[255040]: Exit code: -6
Nov 29 07:58:52 compute-0 nova_compute[255040]: Stdout: ''
Nov 29 07:58:52 compute-0 nova_compute[255040]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack : Unexpected error while running command.
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891:id=openstack --force-share --output=json
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server Exit code: -6
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server Stdout: ''
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.066 255071 ERROR oslo_messaging.rpc.server 
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.938 255071 DEBUG oslo_concurrency.lockutils [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.939 255071 DEBUG oslo_concurrency.lockutils [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:52 compute-0 nova_compute[255040]: 2025-11-29 07:58:52.961 255071 INFO nova.compute.manager [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Detaching volume 6e4e0855-a938-4ee2-8827-c1624e3b4891
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.124 255071 INFO nova.virt.block_device [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Attempting to driver detach volume 6e4e0855-a938-4ee2-8827-c1624e3b4891 from mountpoint /dev/vdb
Nov 29 07:58:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.243 255071 DEBUG os_brick.encryptors [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Using volume encryption metadata '{'encryption_key_id': '5fd34816-1da5-44bd-9e58-e7d93b73a9a3', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e', 'attached_at': '', 'detached_at': '', 'volume_id': '6e4e0855-a938-4ee2-8827-c1624e3b4891', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.253 255071 DEBUG nova.virt.libvirt.driver [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Attempting to detach device vdb from instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.254 255071 DEBUG nova.virt.libvirt.guest [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891">
Nov 29 07:58:53 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   </source>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <serial>6e4e0855-a938-4ee2-8827-c1624e3b4891</serial>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 07:58:53 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="391fa11b-9603-4fa5-addb-5468759843c3"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   </encryption>
Nov 29 07:58:53 compute-0 nova_compute[255040]: </disk>
Nov 29 07:58:53 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:58:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 KiB/s wr, 7 op/s
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.341 255071 INFO nova.virt.libvirt.driver [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Successfully detached device vdb from instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e from the persistent domain config.
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.342 255071 DEBUG nova.virt.libvirt.driver [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.342 255071 DEBUG nova.virt.libvirt.guest [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891">
Nov 29 07:58:53 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   </source>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <serial>6e4e0855-a938-4ee2-8827-c1624e3b4891</serial>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 07:58:53 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="391fa11b-9603-4fa5-addb-5468759843c3"/>
Nov 29 07:58:53 compute-0 nova_compute[255040]:   </encryption>
Nov 29 07:58:53 compute-0 nova_compute[255040]: </disk>
Nov 29 07:58:53 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.466 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403133.4659688, 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.469 255071 DEBUG nova.virt.libvirt.driver [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.472 255071 INFO nova.virt.libvirt.driver [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Successfully detached device vdb from instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e from the live domain config.
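The detach above is Nova's usual two-phase sequence: remove the <disk> element from the persistent domain definition first, then from the live domain, treating the asynchronous DeviceRemovedEvent (logged at 07:58:53.466) as confirmation. A compact equivalent with the libvirt-python bindings, assuming the local system URI and the domain name from the machine scope further below; the XML is the one logged above:

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-6e4e0855-a938-4ee2-8827-c1624e3b4891">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
      <serial>6e4e0855-a938-4ee2-8827-c1624e3b4891</serial>
      <encryption format="luks">
        <secret type="passphrase" uuid="391fa11b-9603-4fa5-addb-5468759843c3"/>
      </encryption>
    </disk>'''

    conn = libvirt.open('qemu:///system')         # assumption: local system URI
    dom = conn.lookupByName('instance-00000001')  # name from the machine scope

    # Phase 1: drop the device from the persistent (next-boot) definition.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Phase 2: detach from the running guest; completion is signalled by a
    # VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED callback, which Nova waits on.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)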
Nov 29 07:58:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:58:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.9 total, 600.0 interval
                                           Cumulative writes: 6877 writes, 27K keys, 6877 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6877 writes, 1359 syncs, 5.06 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1113 writes, 3306 keys, 1113 commit groups, 1.0 writes per commit group, ingest: 3.30 MB, 0.01 MB/s
                                           Interval WAL: 1113 writes, 446 syncs, 2.50 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.778 255071 DEBUG nova.objects.instance [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'flavor' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:53 compute-0 nova_compute[255040]: 2025-11-29 07:58:53.816 255071 DEBUG oslo_concurrency.lockutils [None req-9aa99bb7-5ee2-4a59-84bd-a33b97847beb 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:53 compute-0 podman[263281]: 2025-11-29 07:58:53.906019989 +0000 UTC m=+0.069688183 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.295 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.475 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.475 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.476 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.476 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.476 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.477 255071 INFO nova.compute.manager [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Terminating instance
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.478 255071 DEBUG nova.compute.manager [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:58:54 compute-0 ceph-mon[75237]: pgmap v1087: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.2 KiB/s wr, 7 op/s
Nov 29 07:58:54 compute-0 kernel: tap45c74639-2d (unregistering): left promiscuous mode
Nov 29 07:58:54 compute-0 NetworkManager[49116]: <info>  [1764403134.7784] device (tap45c74639-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.789 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 ovn_controller[153295]: 2025-11-29T07:58:54Z|00032|binding|INFO|Releasing lport 45c74639-2d52-4fcf-9874-4ec3f104851e from this chassis (sb_readonly=0)
Nov 29 07:58:54 compute-0 ovn_controller[153295]: 2025-11-29T07:58:54Z|00033|binding|INFO|Setting lport 45c74639-2d52-4fcf-9874-4ec3f104851e down in Southbound
Nov 29 07:58:54 compute-0 ovn_controller[153295]: 2025-11-29T07:58:54Z|00034|binding|INFO|Removing iface tap45c74639-2d ovn-installed in OVS
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.792 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.811 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 29 07:58:54 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 19.442s CPU time.
Nov 29 07:58:54 compute-0 systemd-machined[216271]: Machine qemu-1-instance-00000001 terminated.
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.901 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.907 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.921 255071 INFO nova.virt.libvirt.driver [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Instance destroyed successfully.
Nov 29 07:58:54 compute-0 nova_compute[255040]: 2025-11-29 07:58:54.922 255071 DEBUG nova.objects.instance [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lazy-loading 'resources' on Instance uuid 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:54.955 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:5f:e8 10.100.0.6'], port_security=['fa:16:3e:96:5f:e8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5757f1dcffd49e48fe28b1c2c26b71a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5be2926e-1d47-4331-838e-0bd2d002e2f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73c4b79a-ad43-42b8-bec9-14d0c32e9bad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=45c74639-2d52-4fcf-9874-4ec3f104851e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:54.957 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 45c74639-2d52-4fcf-9874-4ec3f104851e in datapath 94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c unbound from our chassis
Nov 29 07:58:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:54.958 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:58:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:54.962 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c52cad96-2e39-46c4-b87a-49f5ccb6c857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:54.963 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c namespace which is not needed anymore
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.031 255071 DEBUG nova.virt.libvirt.vif [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-700650819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-700650819',id=1,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIXdJHxCf+KhIzKU93CDT91LAa/ODzUHhpG+4ryHEHWAJajfjeKrKpfFQLVHoHOxBkcvZ0Yaky80vVJ9BA42t0nKn5643xxhHfoAlCf/6QaaHOImmOmRutgA8MPci8r5PQ==',key_name='tempest-keypair-419189598',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:58:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5757f1dcffd49e48fe28b1c2c26b71a',ramdisk_id='',reservation_id='r-jqytidtm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-306938447',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-306938447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:58:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b288cb3716343b3b86a120d6c892ab4',uuid=5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.031 255071 DEBUG nova.network.os_vif_util [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converting VIF {"id": "45c74639-2d52-4fcf-9874-4ec3f104851e", "address": "fa:16:3e:96:5f:e8", "network": {"id": "94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-82283185-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5757f1dcffd49e48fe28b1c2c26b71a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap45c74639-2d", "ovs_interfaceid": "45c74639-2d52-4fcf-9874-4ec3f104851e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.033 255071 DEBUG nova.network.os_vif_util [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.033 255071 DEBUG os_vif [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.036 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.036 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45c74639-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.038 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.042 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.047 255071 INFO os_vif [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:5f:e8,bridge_name='br-int',has_traffic_filtering=True,id=45c74639-2d52-4fcf-9874-4ec3f104851e,network=Network(94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap45c74639-2d')
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [NOTICE]   (262046) : haproxy version is 2.8.14-c23fe91
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [NOTICE]   (262046) : path to executable is /usr/sbin/haproxy
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [WARNING]  (262046) : Exiting Master process...
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [WARNING]  (262046) : Exiting Master process...
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [ALERT]    (262046) : Current worker (262048) exited with code 143 (Terminated)
Nov 29 07:58:55 compute-0 neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c[262042]: [WARNING]  (262046) : All workers exited. Exiting... (0)
Nov 29 07:58:55 compute-0 systemd[1]: libpod-02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301.scope: Deactivated successfully.
Nov 29 07:58:55 compute-0 podman[263349]: 2025-11-29 07:58:55.217563878 +0000 UTC m=+0.137369671 container died 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 KiB/s wr, 9 op/s
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.365 255071 DEBUG nova.compute.manager [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-unplugged-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.366 255071 DEBUG oslo_concurrency.lockutils [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.366 255071 DEBUG oslo_concurrency.lockutils [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.366 255071 DEBUG oslo_concurrency.lockutils [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.367 255071 DEBUG nova.compute.manager [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] No waiting events found dispatching network-vif-unplugged-45c74639-2d52-4fcf-9874-4ec3f104851e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.367 255071 DEBUG nova.compute.manager [req-eadbb070-35ce-40c2-85ff-78bc77f0efaf req-287391a9-df88-47b3-8ec0-3cc14ac1f0ef cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-unplugged-45c74639-2d52-4fcf-9874-4ec3f104851e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301-userdata-shm.mount: Deactivated successfully.
Nov 29 07:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-af59e6bb946ff976edcdce579b5f1550b27ae1563e9163126d521e44a76be12b-merged.mount: Deactivated successfully.
Nov 29 07:58:55 compute-0 podman[263349]: 2025-11-29 07:58:55.467343397 +0000 UTC m=+0.387149190 container cleanup 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:58:55 compute-0 systemd[1]: libpod-conmon-02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301.scope: Deactivated successfully.
Nov 29 07:58:55 compute-0 ceph-mon[75237]: pgmap v1088: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 KiB/s wr, 9 op/s
Nov 29 07:58:55 compute-0 podman[263382]: 2025-11-29 07:58:55.90190994 +0000 UTC m=+0.406579792 container remove 02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.909 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[40ac732d-ee85-4254-aa45-17497b266463]: (4, ('Sat Nov 29 07:58:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c (02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301)\n02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301\nSat Nov 29 07:58:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c (02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301)\n02cf26a8e96467f3fffe763b9d95786e9e2c3d5883a888aff7feaa91b9a44301\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.912 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[024f103c-edc0-421c-ba4d-a46ec49e6a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.913 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94996ac3-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.917 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-0 kernel: tap94996ac3-30: left promiscuous mode
Nov 29 07:58:55 compute-0 nova_compute[255040]: 2025-11-29 07:58:55.933 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.939 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7479ae90-213a-4525-b0f1-49ac850188f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.954 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f006e561-1884-49ee-81e9-067e08fd0c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.956 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[958e1109-8734-458b-b73a-58cd32f80810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.977 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec61513-9d58-4c8b-940e-709b0e1431eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532562, 'reachable_time': 42949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263399, 'error': None, 'target': 'ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:58:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d94996ac3\x2d35ae\x2d45fc\x2db8e9\x2dec4ad5c5c35c.mount: Deactivated successfully.
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606084562414534 of space, bias 1.0, pg target 0.22818253687243603 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.45134954743765e-06 of space, bias 1.0, pg target 0.0013354048642312951 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:55 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:58:56 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:55.998 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94996ac3-35ae-45fc-b8e9-ec4ad5c5c35c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:58:56 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:58:56.001 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[1519be4d-22b4-4bb8-8c0b-f0afd920bd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.218 255071 INFO nova.virt.libvirt.driver [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Deleting instance files /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_del
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.219 255071 INFO nova.virt.libvirt.driver [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Deletion of /var/lib/nova/instances/5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e_del complete
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.300 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.301 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.306 255071 DEBUG nova.virt.libvirt.host [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.306 255071 INFO nova.virt.libvirt.host [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] UEFI support detected
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.308 255071 INFO nova.compute.manager [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Took 1.83 seconds to destroy the instance on the hypervisor.
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.309 255071 DEBUG oslo.service.loopingcall [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.309 255071 DEBUG nova.compute.manager [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.309 255071 DEBUG nova.network.neutron [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:58:56 compute-0 nova_compute[255040]: 2025-11-29 07:58:56.592 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:58:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 59 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.7 KiB/s wr, 27 op/s
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.213 255071 DEBUG nova.compute.manager [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.214 255071 DEBUG oslo_concurrency.lockutils [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.214 255071 DEBUG oslo_concurrency.lockutils [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.214 255071 DEBUG oslo_concurrency.lockutils [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.214 255071 DEBUG nova.compute.manager [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] No waiting events found dispatching network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.215 255071 WARNING nova.compute.manager [req-3c8c3c96-81d6-4a0e-abf2-84e169b8a159 req-b15e65a0-8d96-4e8d-b4bf-8a26b5aa566b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received unexpected event network-vif-plugged-45c74639-2d52-4fcf-9874-4ec3f104851e for instance with vm_state active and task_state deleting.
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.307 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.308 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.319 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.320 255071 INFO nova.compute.claims [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:58:58 compute-0 nova_compute[255040]: 2025-11-29 07:58:58.451 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:58 compute-0 ceph-mon[75237]: pgmap v1089: 305 pgs: 305 active+clean; 59 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.7 KiB/s wr, 27 op/s
Nov 29 07:58:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962556101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2962556101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456317605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.006 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.016 255071 DEBUG nova.compute.provider_tree [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.035 255071 DEBUG nova.scheduler.client.report [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.065 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.067 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.101 255071 DEBUG nova.network.neutron [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.126 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.127 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.134 255071 INFO nova.compute.manager [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Took 2.83 seconds to deallocate network for instance.
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.309 255071 INFO nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.313 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.313 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.314 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.329 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:58:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 59 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.7 KiB/s wr, 27 op/s
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.348 255071 DEBUG nova.policy [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a05ba0abba70499fbc58e9840c97b6d3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c301766e23b54f51bc3ecc646fdabab5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.373 255071 DEBUG oslo_concurrency.processutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.406 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.408 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.409 255071 INFO nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Creating image(s)
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.434 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.457 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.480 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.484 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.555 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.556 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.556 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.557 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.581 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:58:59 compute-0 nova_compute[255040]: 2025-11-29 07:58:59.588 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 f663740c-6ef5-4e28-9746-851907470acd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.040 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.339 255071 DEBUG nova.compute.manager [req-bedad839-977f-4499-8bcc-3229ddc56890 req-975df9cc-fae0-474e-a001-6659dd01ced6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Received event network-vif-deleted-45c74639-2d52-4fcf-9874-4ec3f104851e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676255550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.451 255071 DEBUG oslo_concurrency.processutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.461 255071 DEBUG nova.compute.provider_tree [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.479 255071 DEBUG nova.scheduler.client.report [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.501 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:00 compute-0 nova_compute[255040]: 2025-11-29 07:59:00.530 255071 INFO nova.scheduler.client.report [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Deleted allocations for instance 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e
Nov 29 07:59:01 compute-0 nova_compute[255040]: 2025-11-29 07:59:01.042 255071 DEBUG oslo_concurrency.lockutils [None req-077d4a9e-02b5-45c6-a803-7f797a617661 0b288cb3716343b3b86a120d6c892ab4 d5757f1dcffd49e48fe28b1c2c26b71a - - default default] Lock "5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.3 KiB/s wr, 33 op/s
Nov 29 07:59:01 compute-0 nova_compute[255040]: 2025-11-29 07:59:01.439 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Successfully created port: 2882a412-8149-42bc-be44-538ca28e3f31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:59:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2962556101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2962556101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/456317605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:02 compute-0 ceph-mon[75237]: pgmap v1090: 305 pgs: 305 active+clean; 59 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.7 KiB/s wr, 27 op/s
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.372 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Successfully updated port: 2882a412-8149-42bc-be44-538ca28e3f31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.395 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.396 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquired lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.396 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.459 255071 DEBUG nova.compute.manager [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-changed-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.460 255071 DEBUG nova.compute.manager [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Refreshing instance network info cache due to event network-changed-2882a412-8149-42bc-be44-538ca28e3f31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.461 255071 DEBUG oslo_concurrency.lockutils [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:02 compute-0 nova_compute[255040]: 2025-11-29 07:59:02.591 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:59:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Nov 29 07:59:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/676255550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:03 compute-0 ceph-mon[75237]: pgmap v1091: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.3 KiB/s wr, 33 op/s
Nov 29 07:59:04 compute-0 nova_compute[255040]: 2025-11-29 07:59:04.300 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:05 compute-0 nova_compute[255040]: 2025-11-29 07:59:05.044 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Nov 29 07:59:05 compute-0 ceph-mon[75237]: pgmap v1092: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Nov 29 07:59:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:59:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7812 writes, 30K keys, 7812 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7811 writes, 1574 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1073 writes, 2968 keys, 1073 commit groups, 1.0 writes per commit group, ingest: 1.78 MB, 0.00 MB/s
                                           Interval WAL: 1072 writes, 448 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:59:07 compute-0 nova_compute[255040]: 2025-11-29 07:59:07.325 255071 DEBUG nova.network.neutron [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updating instance_info_cache with network_info: [{"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 44 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 93 KiB/s wr, 31 op/s
Nov 29 07:59:07 compute-0 nova_compute[255040]: 2025-11-29 07:59:07.686 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Releasing lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:07 compute-0 nova_compute[255040]: 2025-11-29 07:59:07.687 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Instance network_info: |[{"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:59:07 compute-0 nova_compute[255040]: 2025-11-29 07:59:07.688 255071 DEBUG oslo_concurrency.lockutils [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:07 compute-0 nova_compute[255040]: 2025-11-29 07:59:07.689 255071 DEBUG nova.network.neutron [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Refreshing network info cache for port 2882a412-8149-42bc-be44-538ca28e3f31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:59:07 compute-0 podman[263540]: 2025-11-29 07:59:07.956582043 +0000 UTC m=+0.110177657 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:08 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.054 255071 DEBUG nova.network.neutron [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updated VIF entry in instance network info cache for port 2882a412-8149-42bc-be44-538ca28e3f31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.055 255071 DEBUG nova.network.neutron [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updating instance_info_cache with network_info: [{"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.158 255071 DEBUG oslo_concurrency.lockutils [req-dee667ef-3fb3-4097-87cb-3a6754a44a52 req-7b15c57c-5d01-465f-ba5c-43fb2dd16cd1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.301 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 49 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 351 KiB/s wr, 11 op/s
Nov 29 07:59:09 compute-0 ceph-mon[75237]: pgmap v1093: 305 pgs: 305 active+clean; 42 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.872 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 f663740c-6ef5-4e28-9746-851907470acd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 10.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1123612419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1123612419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.941 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403134.9196723, 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.942 255071 INFO nova.compute.manager [-] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] VM Stopped (Lifecycle Event)
Nov 29 07:59:09 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.949 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] resizing rbd image f663740c-6ef5-4e28-9746-851907470acd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:09.999 255071 DEBUG nova.compute.manager [None req-fe819a3e-af6b-49ad-beea-b1efe5b52eaf - - - - - -] [instance: 5bf6ac19-f8cd-4ae4-bd26-bda0ccdcb04e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.045 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.176 255071 DEBUG nova.objects.instance [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lazy-loading 'migration_context' on Instance uuid f663740c-6ef5-4e28-9746-851907470acd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.199 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.199 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Ensure instance console log exists: /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.200 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.200 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.200 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.202 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Start _get_guest_xml network_info=[{"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.206 255071 WARNING nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.214 255071 DEBUG nova.virt.libvirt.host [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.215 255071 DEBUG nova.virt.libvirt.host [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.218 255071 DEBUG nova.virt.libvirt.host [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.218 255071 DEBUG nova.virt.libvirt.host [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.219 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.219 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.219 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.220 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.220 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.220 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.220 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.221 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.221 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.221 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.221 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.222 255071 DEBUG nova.virt.hardware [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.224 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:10 compute-0 ceph-mon[75237]: pgmap v1094: 305 pgs: 305 active+clean; 44 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 93 KiB/s wr, 31 op/s
Nov 29 07:59:10 compute-0 ceph-mon[75237]: pgmap v1095: 305 pgs: 305 active+clean; 49 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 351 KiB/s wr, 11 op/s
Nov 29 07:59:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1123612419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1123612419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011402216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661726371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3661726371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.682 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.712 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:10 compute-0 nova_compute[255040]: 2025-11-29 07:59:10.717 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166621754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.153 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.155 255071 DEBUG nova.virt.libvirt.vif [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-374271226',display_name='tempest-VolumesActionsTest-instance-374271226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-374271226',id=2,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c301766e23b54f51bc3ecc646fdabab5',ramdisk_id='',reservation_id='r-ny5dug40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1723029332',owner_user_name='tempest-VolumesActionsTest-1723029332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:59Z,user_data=None,user_id='a05ba0abba70499fbc58e9840c97b6d3',uuid=f663740c-6ef5-4e28-9746-851907470acd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.155 255071 DEBUG nova.network.os_vif_util [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converting VIF {"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.156 255071 DEBUG nova.network.os_vif_util [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.158 255071 DEBUG nova.objects.instance [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f663740c-6ef5-4e28-9746-851907470acd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.182 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <uuid>f663740c-6ef5-4e28-9746-851907470acd</uuid>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <name>instance-00000002</name>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <metadata>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesActionsTest-instance-374271226</nova:name>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 07:59:10</nova:creationTime>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:user uuid="a05ba0abba70499fbc58e9840c97b6d3">tempest-VolumesActionsTest-1723029332-project-member</nova:user>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:project uuid="c301766e23b54f51bc3ecc646fdabab5">tempest-VolumesActionsTest-1723029332</nova:project>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <nova:port uuid="2882a412-8149-42bc-be44-538ca28e3f31">
Nov 29 07:59:11 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </metadata>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <system>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="serial">f663740c-6ef5-4e28-9746-851907470acd</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="uuid">f663740c-6ef5-4e28-9746-851907470acd</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </system>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <os>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </os>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <features>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <apic/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </features>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </clock>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </cpu>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   <devices>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/f663740c-6ef5-4e28-9746-851907470acd_disk">
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </source>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </auth>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/f663740c-6ef5-4e28-9746-851907470acd_disk.config">
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </source>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 07:59:11 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       </auth>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </disk>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:a5:5e:84"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <target dev="tap2882a412-81"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </interface>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/console.log" append="off"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </serial>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <video>
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </video>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </rng>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 07:59:11 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 07:59:11 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 07:59:11 compute-0 nova_compute[255040]:   </devices>
Nov 29 07:59:11 compute-0 nova_compute[255040]: </domain>
Nov 29 07:59:11 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.184 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Preparing to wait for external event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.184 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.184 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.185 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.185 255071 DEBUG nova.virt.libvirt.vif [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-374271226',display_name='tempest-VolumesActionsTest-instance-374271226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-374271226',id=2,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c301766e23b54f51bc3ecc646fdabab5',ramdisk_id='',reservation_id='r-ny5dug40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1723029332',owner_user_name='tempest-VolumesActionsTest-1723029332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:59Z,user_data=None,user_id='a05ba0abba70499fbc58e9840c97b6d3',uuid=f663740c-6ef5-4e28-9746-851907470acd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.186 255071 DEBUG nova.network.os_vif_util [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converting VIF {"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.186 255071 DEBUG nova.network.os_vif_util [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.187 255071 DEBUG os_vif [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.187 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.188 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.188 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.193 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.193 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2882a412-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.194 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2882a412-81, col_values=(('external_ids', {'iface-id': '2882a412-8149-42bc-be44-538ca28e3f31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:5e:84', 'vm-uuid': 'f663740c-6ef5-4e28-9746-851907470acd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.196 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:11 compute-0 NetworkManager[49116]: <info>  [1764403151.1970] manager: (tap2882a412-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.198 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.202 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.203 255071 INFO os_vif [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81')
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.277 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.277 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.278 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] No VIF found with MAC fa:16:3e:a5:5e:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.278 255071 INFO nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Using config drive
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.295 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 67 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 MiB/s wr, 30 op/s
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.595 255071 INFO nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Creating config drive at /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.602 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzy6dxxka execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.744 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzy6dxxka" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
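The config drive built here is nothing more than an ISO 9660 image of a temporary metadata directory, labelled config-2 so the guest's cloud-init can find it by volume label. A sketch of the equivalent call through oslo's processutils, with the exact arguments from the logged command; the /tmp staging path is just the throwaway tempdir Nova created for this run.

    # Re-run the logged mkisofs invocation; /tmp/tmpzy6dxxka stands in
    # for whatever staging directory holds the metadata files.
    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              'f663740c-6ef5-4e28-9746-851907470acd/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',   # the volume label cloud-init probes for
        '/tmp/tmpzy6dxxka')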
Nov 29 07:59:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3011402216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3661726371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3661726371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1166621754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.785 255071 DEBUG nova.storage.rbd_utils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] rbd image f663740c-6ef5-4e28-9746-851907470acd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:11 compute-0 nova_compute[255040]: 2025-11-29 07:59:11.789 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config f663740c-6ef5-4e28-9746-851907470acd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:59:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 7137 writes, 27K keys, 7137 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7137 writes, 1446 syncs, 4.94 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1268 writes, 3392 keys, 1268 commit groups, 1.0 writes per commit group, ingest: 2.29 MB, 0.00 MB/s
                                           Interval WAL: 1268 writes, 540 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
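The per-sync figures in the stats dump above are plain ratios of the counters on the same lines; a quick arithmetic check:

    # Cumulative WAL: 7137 writes over 1446 syncs
    assert round(7137 / 1446, 2) == 4.94
    # Interval WAL: 1268 writes over 540 syncs
    assert round(1268 / 540, 2) == 2.35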
Nov 29 07:59:12 compute-0 nova_compute[255040]: 2025-11-29 07:59:12.843 255071 DEBUG oslo_concurrency.processutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config f663740c-6ef5-4e28-9746-851907470acd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:12 compute-0 nova_compute[255040]: 2025-11-29 07:59:12.844 255071 INFO nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Deleting local config drive /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd/disk.config because it was imported into RBD.
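With the instance's disks backed by Ceph, the freshly built ISO is pushed into the vms pool as <instance-uuid>_disk.config and the local file is discarded; the logged command is a plain rbd CLI import. The same step as a sketch, with pool, client id and paths exactly as in this run:

    import subprocess

    subprocess.run(
        ['rbd', 'import',
         '--pool', 'vms',
         '/var/lib/nova/instances/'
         'f663740c-6ef5-4e28-9746-851907470acd/disk.config',
         'f663740c-6ef5-4e28-9746-851907470acd_disk.config',
         '--image-format=2',     # format 2 images support snapshot/clone
         '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)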
Nov 29 07:59:12 compute-0 ceph-mon[75237]: pgmap v1096: 305 pgs: 305 active+clean; 67 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 MiB/s wr, 30 op/s
Nov 29 07:59:12 compute-0 kernel: tap2882a412-81: entered promiscuous mode
Nov 29 07:59:12 compute-0 ovn_controller[153295]: 2025-11-29T07:59:12Z|00035|binding|INFO|Claiming lport 2882a412-8149-42bc-be44-538ca28e3f31 for this chassis.
Nov 29 07:59:12 compute-0 ovn_controller[153295]: 2025-11-29T07:59:12Z|00036|binding|INFO|2882a412-8149-42bc-be44-538ca28e3f31: Claiming fa:16:3e:a5:5e:84 10.100.0.6
Nov 29 07:59:12 compute-0 NetworkManager[49116]: <info>  [1764403152.9257] manager: (tap2882a412-81): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 07:59:12 compute-0 nova_compute[255040]: 2025-11-29 07:59:12.925 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.934 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:5e:84 10.100.0.6'], port_security=['fa:16:3e:a5:5e:84 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f663740c-6ef5-4e28-9746-851907470acd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd160439-5666-434d-854d-1a14849672c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c301766e23b54f51bc3ecc646fdabab5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2cd2d8d8-9ea8-48ce-8168-e4a5cefbb2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81ef2cbb-df9d-470c-af02-3f1a1bf4c210, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2882a412-8149-42bc-be44-538ca28e3f31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.936 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2882a412-8149-42bc-be44-538ca28e3f31 in datapath cd160439-5666-434d-854d-1a14849672c3 bound to our chassis
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.938 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd160439-5666-434d-854d-1a14849672c3
Nov 29 07:59:12 compute-0 ovn_controller[153295]: 2025-11-29T07:59:12Z|00037|binding|INFO|Setting lport 2882a412-8149-42bc-be44-538ca28e3f31 ovn-installed in OVS
Nov 29 07:59:12 compute-0 ovn_controller[153295]: 2025-11-29T07:59:12Z|00038|binding|INFO|Setting lport 2882a412-8149-42bc-be44-538ca28e3f31 up in Southbound
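Claiming the lport means ovn-controller found an OVS interface whose external_ids:iface-id equals the logical port name, bound it to this chassis, and set the port up in the Southbound database; that state flip is what drives the network-vif-plugged event Nova receives a moment later. One way to inspect the binding from the chassis, assuming ovn-sbctl on this node can reach the SB DB:

    import subprocess

    # Dump the Port_Binding row for the lport claimed above; the chassis
    # column should point at compute-0 and up should read true.
    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=2882a412-8149-42bc-be44-538ca28e3f31'],
        capture_output=True, text=True, check=True)
    print(out.stdout)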
Nov 29 07:59:12 compute-0 nova_compute[255040]: 2025-11-29 07:59:12.947 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.953 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[be6e4c2c-6982-421c-85d9-11486399bf53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.955 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd160439-51 in ovnmeta-cd160439-5666-434d-854d-1a14849672c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.957 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd160439-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.957 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf318b3-18fa-4262-8d4c-781abd604df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.958 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[04867d1d-84dc-457b-82bc-8fc1898b51ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
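Provisioning metadata for the network means building a veth pair whose inner end (tapcd160439-51) lives in the ovnmeta-<network> namespace while the peer (tapcd160439-50) stays in the root namespace, to be moved onto br-int a moment later (the DelPort/AddPort transactions below). That is what the privsep replies around here are doing. A rough pyroute2 equivalent of the plumbing, with the interface and namespace names from this run; idempotence and error handling omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3'
    netns.create(ns)   # assumes the namespace does not already exist

    ipr = IPRoute()
    # veth pair: the -50 end stays in the root namespace for br-int,
    # the -51 end is pushed into the metadata namespace
    ipr.link('add', ifname='tapcd160439-50', kind='veth',
             peer='tapcd160439-51')
    idx = ipr.link_lookup(ifname='tapcd160439-51')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()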
Nov 29 07:59:12 compute-0 systemd-udevd[263776]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:59:12 compute-0 systemd-machined[216271]: New machine qemu-2-instance-00000002.
Nov 29 07:59:12 compute-0 NetworkManager[49116]: <info>  [1764403152.9762] device (tap2882a412-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:59:12 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 29 07:59:12 compute-0 NetworkManager[49116]: <info>  [1764403152.9776] device (tap2882a412-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.976 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[43e4c18b-2c00-4ff4-9c4c-9f4a707d938c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:12 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:12.994 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bd9bef-746f-4d0d-ba43-011db621b931]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.036 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[84f9a149-89ad-4a78-a04a-56faf78875f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.045 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c10afb-0ab4-4129-81e5-830d23470766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 NetworkManager[49116]: <info>  [1764403153.0466] manager: (tapcd160439-50): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 07:59:13 compute-0 systemd-udevd[263779]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:59:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583265479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583265479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
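The audited mon commands are the routine capacity probes OpenStack's Ceph clients make: a cluster-wide df plus the quota on the volumes pool. Equivalent queries from the CLI as client.openstack; this is a sketch of the shell path, not the in-process librados calls the services actually use:

    import json, subprocess

    def mon_cmd(*args):
        out = subprocess.run(
            ['ceph', '--id', 'openstack', '--format', 'json', *args],
            capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    df = mon_cmd('df')                                # {"prefix":"df"}
    quota = mon_cmd('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats'], quota)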
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.086 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[35d86a0c-01d0-4556-a5f5-877d12c49ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.090 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[09097f84-77d5-490b-8cb6-49c5e72e9e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 NetworkManager[49116]: <info>  [1764403153.1242] device (tapcd160439-50): carrier: link connected
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.132 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[c66f7605-182e-46bc-9615-02d65c777b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.150 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[840db098-42f3-4cb4-a843-e154722f7d05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd160439-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539046, 'reachable_time': 19054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263808, 'error': None, 'target': 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.165 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f19533e8-5664-4f98-87fd-1913450251b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:e753'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539046, 'tstamp': 539046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263809, 'error': None, 'target': 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.198 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd9dc02-e2b1-4359-83b5-56bcc5e8f6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd160439-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539046, 'reachable_time': 19054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263810, 'error': None, 'target': 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.252 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1c2c09-74c4-4dbc-9e03-4cbf85c6ab32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.266 255071 DEBUG nova.compute.manager [req-1ca7e002-e36f-4dd9-8e3d-6605bb411940 req-1af925a9-f6a8-4097-9c05-e9c7c829db1c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.267 255071 DEBUG oslo_concurrency.lockutils [req-1ca7e002-e36f-4dd9-8e3d-6605bb411940 req-1af925a9-f6a8-4097-9c05-e9c7c829db1c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.267 255071 DEBUG oslo_concurrency.lockutils [req-1ca7e002-e36f-4dd9-8e3d-6605bb411940 req-1af925a9-f6a8-4097-9c05-e9c7c829db1c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.268 255071 DEBUG oslo_concurrency.lockutils [req-1ca7e002-e36f-4dd9-8e3d-6605bb411940 req-1af925a9-f6a8-4097-9c05-e9c7c829db1c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.268 255071 DEBUG nova.compute.manager [req-1ca7e002-e36f-4dd9-8e3d-6605bb411940 req-1af925a9-f6a8-4097-9c05-e9c7c829db1c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Processing event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
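This is the consuming half of Nova's vif-plug handshake: build_and_run_instance registered a waiter for network-vif-plugged before plugging the VIF, and the external event from Neutron pops it, which is why the spawn path below reports the wait completing in 0 seconds. The shape of that mechanism, reduced to a toy sketch; Nova's real version lives in nova.compute.manager on eventlet, and every name here is illustrative:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}          # (instance, event-tag) -> Event

        def prepare(self, instance, tag):
            with self._lock:           # cf. the "-events" lock above
                return self._events.setdefault((instance, tag),
                                               threading.Event())

        def pop(self, instance, tag):
            with self._lock:
                ev = self._events.pop((instance, tag), None)
            if ev:                     # "Processing event ..."
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare('f663740c', 'network-vif-plugged')
    # stand-in for Neutron's callback arriving on another thread:
    threading.Timer(0.1, events.pop,
                    args=('f663740c', 'network-vif-plugged')).start()
    waiter.wait(timeout=300)           # spawn blocks here until the event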
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.326 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbff9d4-1187-4e38-946d-6edd6278bdf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.328 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd160439-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.329 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.329 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd160439-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.332 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:13 compute-0 NetworkManager[49116]: <info>  [1764403153.3326] manager: (tapcd160439-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 07:59:13 compute-0 kernel: tapcd160439-50: entered promiscuous mode
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.336 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd160439-50, col_values=(('external_ids', {'iface-id': 'aa582f6c-337f-423b-a9ff-23e3004ff8b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:13 compute-0 ovn_controller[153295]: 2025-11-29T07:59:13Z|00039|binding|INFO|Releasing lport aa582f6c-337f-423b-a9ff-23e3004ff8b3 from this chassis (sb_readonly=0)
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.337 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.360 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.361 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd160439-5666-434d-854d-1a14849672c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd160439-5666-434d-854d-1a14849672c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.362 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d7861bd3-0b81-423f-a216-2920a8f88db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.363 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: global
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-cd160439-5666-434d-854d-1a14849672c3
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/cd160439-5666-434d-854d-1a14849672c3.pid.haproxy
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID cd160439-5666-434d-854d-1a14849672c3
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:13.364 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3', 'env', 'PROCESS_TAG=haproxy-cd160439-5666-434d-854d-1a14849672c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd160439-5666-434d-854d-1a14849672c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
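The rendered configuration is then handed to haproxy inside the metadata namespace; on this podified node the rootwrap command ends up wrapped in the podman container created just below, but the essential step is only ip netns exec plus haproxy -f. Stripped of rootwrap and the PROCESS_TAG environment, the invocation reduces to:

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-cd160439-5666-434d-854d-1a14849672c3',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/'
         'cd160439-5666-434d-854d-1a14849672c3.conf'],
        check=True)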
Nov 29 07:59:13 compute-0 podman[263881]: 2025-11-29 07:59:13.776738041 +0000 UTC m=+0.057748372 container create 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.778 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.780 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403153.7782028, f663740c-6ef5-4e28-9746-851907470acd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.780 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] VM Started (Lifecycle Event)
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.786 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.791 255071 INFO nova.virt.libvirt.driver [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] Instance spawned successfully.
Nov 29 07:59:13 compute-0 nova_compute[255040]: 2025-11-29 07:59:13.791 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:59:13 compute-0 systemd[1]: Started libpod-conmon-583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c.scope.
Nov 29 07:59:13 compute-0 podman[263881]: 2025-11-29 07:59:13.743082972 +0000 UTC m=+0.024093323 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:59:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954436b254573d290ed0c5a370e2fededa4db3f1daa038a2624a2b6e60b2199d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2583265479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2583265479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:13 compute-0 ceph-mon[75237]: pgmap v1097: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 29 07:59:13 compute-0 podman[263881]: 2025-11-29 07:59:13.90108335 +0000 UTC m=+0.182093701 container init 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 07:59:13 compute-0 podman[263881]: 2025-11-29 07:59:13.908978292 +0000 UTC m=+0.189988623 container start 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 07:59:13 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [NOTICE]   (263901) : New worker (263903) forked
Nov 29 07:59:13 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [NOTICE]   (263901) : Loading success.
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.062 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.073 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.078 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.079 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.079 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.080 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.080 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.081 255071 DEBUG nova.virt.libvirt.driver [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.105546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154105723, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1544, "num_deletes": 265, "total_data_size": 2217245, "memory_usage": 2255648, "flush_reason": "Manual Compaction"}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.107 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.108 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403153.7797282, f663740c-6ef5-4e28-9746-851907470acd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.108 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] VM Paused (Lifecycle Event)
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154127047, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 2170552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19093, "largest_seqno": 20636, "table_properties": {"data_size": 2163249, "index_size": 4246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15501, "raw_average_key_size": 19, "raw_value_size": 2148305, "raw_average_value_size": 2764, "num_data_blocks": 191, "num_entries": 777, "num_filter_entries": 777, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403032, "oldest_key_time": 1764403032, "file_creation_time": 1764403154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 21607 microseconds, and 7642 cpu microseconds.
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.127177) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 2170552 bytes OK
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.127203) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.129592) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.129611) EVENT_LOG_v1 {"time_micros": 1764403154129605, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.129630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2210228, prev total WAL file size 2210228, number of live WAL files 2.
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.130879) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353038' seq:0, type:0; will stop at (end)
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(2119KB)], [44(6630KB)]
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154131000, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8959719, "oldest_snapshot_seqno": -1}
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.138 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.148 255071 INFO nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Took 14.74 seconds to spawn the instance on the hypervisor.
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.149 255071 DEBUG nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.150 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403153.786035, f663740c-6ef5-4e28-9746-851907470acd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.151 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] VM Resumed (Lifecycle Event)
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.182 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.185 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
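The 0 and 1 being compared are Nova's power-state constants: the DB still holds NOSTATE because the instance is mid-spawn, while libvirt already reports RUNNING; the pending spawning task is also why the earlier Paused sample was skipped rather than acted on. For reference, the mapping from nova.compute.power_state:

    # nova.compute.power_state constants
    NOSTATE   = 0x00   # the DB power_state "0" above
    RUNNING   = 0x01   # the VM power_state "1" reported by libvirt
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07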
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4716 keys, 8832761 bytes, temperature: kUnknown
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154193591, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8832761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8798687, "index_size": 21164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 116996, "raw_average_key_size": 24, "raw_value_size": 8710934, "raw_average_value_size": 1847, "num_data_blocks": 883, "num_entries": 4716, "num_filter_entries": 4716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.193841) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8832761 bytes
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.196242) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.0 rd, 141.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 5257, records dropped: 541 output_compression: NoCompression
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.196259) EVENT_LOG_v1 {"time_micros": 1764403154196251, "job": 22, "event": "compaction_finished", "compaction_time_micros": 62665, "compaction_time_cpu_micros": 26324, "output_level": 6, "num_output_files": 1, "total_output_size": 8832761, "num_input_records": 5257, "num_output_records": 4716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
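The amplification figures in the compaction summary follow directly from the table sizes logged above: the level-0 input (#46), the level-6 input (#44, logged as 6630KB), and the output (#47). Checking the arithmetic:

    l0_in = 2170552          # bytes read from L0 input, table #46
    l6_in = 6630 * 1024      # bytes read from L6 input, table #44
    out   = 8832761          # bytes written out, table #47

    read_write_amplify = (l0_in + l6_in + out) / l0_in   # ~8.2
    write_amplify      = out / l0_in                     # ~4.1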
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154196729, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403154197735, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.130674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.197924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.197933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.197937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.197940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-07:59:14.197942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.303 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:14 compute-0 nova_compute[255040]: 2025-11-29 07:59:14.499 255071 INFO nova.compute.manager [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Took 16.23 seconds to build instance.
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.003 255071 DEBUG oslo_concurrency.lockutils [None req-b527d613-42fd-4589-bd3e-b28ebb47fb1c a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.781 255071 DEBUG nova.compute.manager [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.781 255071 DEBUG oslo_concurrency.lockutils [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.782 255071 DEBUG oslo_concurrency.lockutils [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.782 255071 DEBUG oslo_concurrency.lockutils [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.782 255071 DEBUG nova.compute.manager [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] No waiting events found dispatching network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:59:15 compute-0 nova_compute[255040]: 2025-11-29 07:59:15.782 255071 WARNING nova.compute.manager [req-e890ca0e-20f2-46e5-a1d0-62c216112680 req-27b2cdbf-91c7-46ee-ba09-10b0ce62188b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received unexpected event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 for instance with vm_state active and task_state None.
Nov 29 07:59:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 07:59:16 compute-0 ceph-mon[75237]: pgmap v1098: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 07:59:16 compute-0 nova_compute[255040]: 2025-11-29 07:59:16.197 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:16 compute-0 ovn_controller[153295]: 2025-11-29T07:59:16Z|00040|binding|INFO|Releasing lport aa582f6c-337f-423b-a9ff-23e3004ff8b3 from this chassis (sb_readonly=0)
Nov 29 07:59:16 compute-0 nova_compute[255040]: 2025-11-29 07:59:16.462 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:16 compute-0 ovn_controller[153295]: 2025-11-29T07:59:16Z|00041|binding|INFO|Releasing lport aa582f6c-337f-423b-a9ff-23e3004ff8b3 from this chassis (sb_readonly=0)
Nov 29 07:59:16 compute-0 nova_compute[255040]: 2025-11-29 07:59:16.683 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 07:59:17 compute-0 podman[263913]: 2025-11-29 07:59:17.929871118 +0000 UTC m=+0.082974542 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:59:18 compute-0 ceph-mon[75237]: pgmap v1099: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 07:59:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:19 compute-0 nova_compute[255040]: 2025-11-29 07:59:19.306 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 07:59:20 compute-0 ceph-mon[75237]: pgmap v1100: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.199 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 125 op/s
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.796 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.797 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.824 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.825 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.826 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.826 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.849 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.850 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.850 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.850 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.851 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:21.896 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:21.897 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:59:21 compute-0 nova_compute[255040]: 2025-11-29 07:59:21.897 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:22 compute-0 ceph-mon[75237]: pgmap v1101: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 125 op/s
Nov 29 07:59:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:22.900 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 607 KiB/s wr, 106 op/s
Nov 29 07:59:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646823733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.558 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.640 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.641 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.859 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.860 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.861 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.861 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.949 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance f663740c-6ef5-4e28-9746-851907470acd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.949 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.950 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:59:23 compute-0 nova_compute[255040]: 2025-11-29 07:59:23.990 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.310 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483286914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.501 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.508 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.526 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.553 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.554 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.704 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.705 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.705 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:59:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/646823733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:24 compute-0 podman[263980]: 2025-11-29 07:59:24.966145607 +0000 UTC m=+0.119696844 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.970 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.970 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.971 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:59:24 compute-0 nova_compute[255040]: 2025-11-29 07:59:24.971 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f663740c-6ef5-4e28-9746-851907470acd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 80 op/s
Nov 29 07:59:26 compute-0 ceph-mon[75237]: pgmap v1102: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 607 KiB/s wr, 106 op/s
Nov 29 07:59:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2483286914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:26 compute-0 ceph-mon[75237]: pgmap v1103: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 80 op/s
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.337 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.779 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updating instance_info_cache with network_info: [{"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.797 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-f663740c-6ef5-4e28-9746-851907470acd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.798 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.798 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.799 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.799 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:26 compute-0 nova_compute[255040]: 2025-11-29 07:59:26.799 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:59:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:27.119 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:27.119 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:27.120 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 341 B/s wr, 40 op/s
Nov 29 07:59:29 compute-0 nova_compute[255040]: 2025-11-29 07:59:29.309 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 682 B/s wr, 44 op/s
Nov 29 07:59:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:29 compute-0 ceph-mon[75237]: pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 341 B/s wr, 40 op/s
Nov 29 07:59:31 compute-0 ceph-mon[75237]: pgmap v1105: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 682 B/s wr, 44 op/s
Nov 29 07:59:31 compute-0 nova_compute[255040]: 2025-11-29 07:59:31.341 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 92 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 469 KiB/s wr, 32 op/s
Nov 29 07:59:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 29 07:59:32 compute-0 ceph-mon[75237]: pgmap v1106: 305 pgs: 305 active+clean; 92 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 469 KiB/s wr, 32 op/s
Nov 29 07:59:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 07:59:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 29 07:59:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4043964733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4043964733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 98 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Nov 29 07:59:33 compute-0 ceph-mon[75237]: osdmap e160: 3 total, 3 up, 3 in
Nov 29 07:59:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4043964733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4043964733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.312 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.802 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.803 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.804 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.804 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.805 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.807 255071 INFO nova.compute.manager [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Terminating instance
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.809 255071 DEBUG nova.compute.manager [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:59:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:34 compute-0 ceph-mon[75237]: pgmap v1108: 305 pgs: 305 active+clean; 98 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Nov 29 07:59:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443182988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443182988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:34 compute-0 kernel: tap2882a412-81 (unregistering): left promiscuous mode
Nov 29 07:59:34 compute-0 NetworkManager[49116]: <info>  [1764403174.9233] device (tap2882a412-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:59:34 compute-0 ovn_controller[153295]: 2025-11-29T07:59:34Z|00042|binding|INFO|Releasing lport 2882a412-8149-42bc-be44-538ca28e3f31 from this chassis (sb_readonly=0)
Nov 29 07:59:34 compute-0 ovn_controller[153295]: 2025-11-29T07:59:34Z|00043|binding|INFO|Setting lport 2882a412-8149-42bc-be44-538ca28e3f31 down in Southbound
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.980 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-0 ovn_controller[153295]: 2025-11-29T07:59:34Z|00044|binding|INFO|Removing iface tap2882a412-81 ovn-installed in OVS
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.985 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:34.990 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:5e:84 10.100.0.6'], port_security=['fa:16:3e:a5:5e:84 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f663740c-6ef5-4e28-9746-851907470acd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd160439-5666-434d-854d-1a14849672c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c301766e23b54f51bc3ecc646fdabab5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2cd2d8d8-9ea8-48ce-8168-e4a5cefbb2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81ef2cbb-df9d-470c-af02-3f1a1bf4c210, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2882a412-8149-42bc-be44-538ca28e3f31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:34.992 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2882a412-8149-42bc-be44-538ca28e3f31 in datapath cd160439-5666-434d-854d-1a14849672c3 unbound from our chassis
Nov 29 07:59:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:34.993 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd160439-5666-434d-854d-1a14849672c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:59:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:34.994 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f7bafd-1158-4438-94dc-6932a9e6643e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:34.995 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd160439-5666-434d-854d-1a14849672c3 namespace which is not needed anymore
Nov 29 07:59:34 compute-0 nova_compute[255040]: 2025-11-29 07:59:34.997 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 29 07:59:35 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 14.018s CPU time.
Nov 29 07:59:35 compute-0 systemd-machined[216271]: Machine qemu-2-instance-00000002 terminated.
Nov 29 07:59:35 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [NOTICE]   (263901) : haproxy version is 2.8.14-c23fe91
Nov 29 07:59:35 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [NOTICE]   (263901) : path to executable is /usr/sbin/haproxy
Nov 29 07:59:35 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [WARNING]  (263901) : Exiting Master process...
Nov 29 07:59:35 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [ALERT]    (263901) : Current worker (263903) exited with code 143 (Terminated)
Nov 29 07:59:35 compute-0 neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3[263897]: [WARNING]  (263901) : All workers exited. Exiting... (0)
Nov 29 07:59:35 compute-0 systemd[1]: libpod-583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c.scope: Deactivated successfully.
Nov 29 07:59:35 compute-0 podman[264024]: 2025-11-29 07:59:35.171996795 +0000 UTC m=+0.071273885 container died 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.217 255071 DEBUG nova.compute.manager [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-unplugged-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.218 255071 DEBUG oslo_concurrency.lockutils [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.219 255071 DEBUG oslo_concurrency.lockutils [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.219 255071 DEBUG oslo_concurrency.lockutils [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.219 255071 DEBUG nova.compute.manager [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] No waiting events found dispatching network-vif-unplugged-2882a412-8149-42bc-be44-538ca28e3f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.219 255071 DEBUG nova.compute.manager [req-45cf8de4-e204-4ef3-86e6-eae378a90898 req-03719aa6-7b76-4fc9-9bb7-a9541e4882db cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-unplugged-2882a412-8149-42bc-be44-538ca28e3f31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.246 255071 INFO nova.virt.libvirt.driver [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] Instance destroyed successfully.
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.247 255071 DEBUG nova.objects.instance [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lazy-loading 'resources' on Instance uuid f663740c-6ef5-4e28-9746-851907470acd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-954436b254573d290ed0c5a370e2fededa4db3f1daa038a2624a2b6e60b2199d-merged.mount: Deactivated successfully.
Nov 29 07:59:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c-userdata-shm.mount: Deactivated successfully.
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.260 255071 DEBUG nova.virt.libvirt.vif [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-374271226',display_name='tempest-VolumesActionsTest-instance-374271226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-374271226',id=2,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c301766e23b54f51bc3ecc646fdabab5',ramdisk_id='',reservation_id='r-ny5dug40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1723029332',owner_user_name='tempest-VolumesActionsTest-1723029332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:14Z,user_data=None,user_id='a05ba0abba70499fbc58e9840c97b6d3',uuid=f663740c-6ef5-4e28-9746-851907470acd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.261 255071 DEBUG nova.network.os_vif_util [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converting VIF {"id": "2882a412-8149-42bc-be44-538ca28e3f31", "address": "fa:16:3e:a5:5e:84", "network": {"id": "cd160439-5666-434d-854d-1a14849672c3", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1569429700-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c301766e23b54f51bc3ecc646fdabab5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2882a412-81", "ovs_interfaceid": "2882a412-8149-42bc-be44-538ca28e3f31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.262 255071 DEBUG nova.network.os_vif_util [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.263 255071 DEBUG os_vif [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
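The four records above are nova's VIF teardown for the deleting instance: vif.py:828 starts the unplug, os_vif_util.py:511/548 converts the legacy VIF dict into an os-vif VIFOpenVSwitch object, and os_vif/__init__.py:109 hands it to the os-vif library. A minimal sketch of that handoff, with identifiers copied from the log; the wiring outside nova is an assumption, and actually unplugging also requires a reachable Open vSwitch and root privileges:

    # Sketch of the convert-and-unplug sequence logged above; values are
    # copied from the log, everything else is illustrative.
    import os_vif
    from os_vif.objects import instance_info as ii
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the 'ovs' os-vif plugin, among others

    vif = vif_obj.VIFOpenVSwitch(
        id='2882a412-8149-42bc-be44-538ca28e3f31',
        address='fa:16:3e:a5:5e:84',
        vif_name='tap2882a412-81',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='2882a412-8149-42bc-be44-538ca28e3f31'))
    info = ii.InstanceInfo(
        uuid='f663740c-6ef5-4e28-9746-851907470acd',
        name='tempest-VolumesActionsTest-instance-374271226')

    os_vif.unplug(vif, info)  # emits the "Unplugging vif ..." line above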
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.266 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.267 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2882a412-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.270 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.273 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.273 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.277 255071 INFO os_vif [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:5e:84,bridge_name='br-int',has_traffic_filtering=True,id=2882a412-8149-42bc-be44-538ca28e3f31,network=Network(cd160439-5666-434d-854d-1a14849672c3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2882a412-81')
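The DelPortCommand at transaction.py:89 is the ovsdbapp transaction behind that unplug. A rough standalone equivalent, assuming ovsdbapp and the conventional ovsdb-server socket path (the path does not appear in this log):

    # Standalone equivalent of the logged DelPortCommand; the socket path
    # is the usual default and an assumption here.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same arguments as the log line: port=tap2882a412-81, bridge=br-int,
    # if_exists=True so a missing port is not treated as an error.
    api.del_port('tap2882a412-81', bridge='br-int',
                 if_exists=True).execute(check_error=True)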
Nov 29 07:59:35 compute-0 podman[264024]: 2025-11-29 07:59:35.290559458 +0000 UTC m=+0.189836558 container cleanup 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 07:59:35 compute-0 systemd[1]: libpod-conmon-583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c.scope: Deactivated successfully.
Nov 29 07:59:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 109 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 07:59:35 compute-0 podman[264079]: 2025-11-29 07:59:35.501482996 +0000 UTC m=+0.184693340 container remove 583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.509 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a456a-0c58-4036-be2a-2ddcf3435cbf]: (4, ('Sat Nov 29 07:59:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3 (583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c)\n583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c\nSat Nov 29 07:59:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd160439-5666-434d-854d-1a14849672c3 (583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c)\n583ea0590e70dc31deab1b525b5ebc3b6986b02869ed8e4fab20dec2de775c3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.512 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6f689373-0da0-4c6d-9c38-79450d27d12e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
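The reply[...] records are oslo.privsep round-trips: the metadata agent runs unprivileged, so each privileged operation (stopping the haproxy container here, netlink queries and namespace removal below) is shipped to a privsep helper daemon, whose answer is logged at daemon.py:501 as a (msg-type, result) tuple. A sketch of how such an entrypoint is declared; the context and function names are illustrative, not neutron's actual definitions:

    # Illustrative oslo.privsep entrypoint; only the API shape is real.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    ctx = priv_context.PrivContext(
        'example',                      # hypothetical context name
        cfg_section='example_privsep',
        pypath='example_mod.ctx',       # import path of this context object
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN])

    @ctx.entrypoint
    def stop_container(name):
        # Runs inside the privileged daemon; the caller sees only the
        # serialized return value, logged as "privsep: reply[...]".
        return True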
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.513 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd160439-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:35 compute-0 kernel: tapcd160439-50: left promiscuous mode
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.515 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 nova_compute[255040]: 2025-11-29 07:59:35.528 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.532 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b5555a56-8b17-4867-9642-efba33eb3981]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.547 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbd5876-8bfe-439b-9548-9e325f2eec9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.548 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2784c94b-9bb6-42da-b7d5-5ea5d791b4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.570 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a877fc9a-f4e0-4d1c-9b49-bc9adf8c19ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539036, 'reachable_time': 37583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264097, 'error': None, 'target': 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.573 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd160439-5666-434d-854d-1a14849672c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:59:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 07:59:35.574 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[7f41b724-1153-4e8f-b4e6-d5bbbacdb43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd160439\x2d5666\x2d434d\x2d854d\x2d1a14849672c3.mount: Deactivated successfully.
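ip_lib.py:607 confirms the ovnmeta namespace is gone; the systemd record that follows is the corresponding /run/netns bind mount being released. Neutron's privileged helper does this with pyroute2; a minimal equivalent (needs root, namespace name taken from the log):

    # Minimal equivalent of neutron's remove_netns(); requires root.
    from pyroute2 import netns

    name = 'ovnmeta-cd160439-5666-434d-854d-1a14849672c3'
    if name in netns.listnetns():
        netns.remove(name)  # unlinks /run/netns/<name>, as logged above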
Nov 29 07:59:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1443182988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1443182988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:35 compute-0 ceph-mon[75237]: pgmap v1109: 305 pgs: 305 active+clean; 109 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.318 255071 DEBUG nova.compute.manager [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.319 255071 DEBUG oslo_concurrency.lockutils [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f663740c-6ef5-4e28-9746-851907470acd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.319 255071 DEBUG oslo_concurrency.lockutils [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.319 255071 DEBUG oslo_concurrency.lockutils [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.319 255071 DEBUG nova.compute.manager [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] No waiting events found dispatching network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.320 255071 WARNING nova.compute.manager [req-fde896f8-4ca7-40bd-87c1-4e22790233bd req-3f0ead00-1b13-4f80-adea-b271783863bc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received unexpected event network-vif-plugged-2882a412-8149-42bc-be44-538ca28e3f31 for instance with vm_state active and task_state deleting.
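The lockutils triple above brackets pop_instance_event: nova serializes event handling per instance on a '<uuid>-events' lock, finds no waiter registered for network-vif-plugged, and, since the instance is already in task_state deleting, discards the late event with the WARNING. The acquired/released DEBUG lines come from oslo.concurrency's lock helper; the pattern, with the lock name taken from the log and a placeholder body:

    # Source of the "Acquiring lock"/"acquired"/"released" DEBUG lines.
    from oslo_concurrency import lockutils

    with lockutils.lock('f663740c-6ef5-4e28-9746-851907470acd-events'):
        # critical section: pop the waiting event for this instance, if any
        pass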
Nov 29 07:59:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 109 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.958 255071 INFO nova.virt.libvirt.driver [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Deleting instance files /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd_del
Nov 29 07:59:37 compute-0 nova_compute[255040]: 2025-11-29 07:59:37.959 255071 INFO nova.virt.libvirt.driver [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Deletion of /var/lib/nova/instances/f663740c-6ef5-4e28-9746-851907470acd_del complete
Nov 29 07:59:38 compute-0 nova_compute[255040]: 2025-11-29 07:59:38.023 255071 INFO nova.compute.manager [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Took 3.21 seconds to destroy the instance on the hypervisor.
Nov 29 07:59:38 compute-0 nova_compute[255040]: 2025-11-29 07:59:38.024 255071 DEBUG oslo.service.loopingcall [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:59:38 compute-0 nova_compute[255040]: 2025-11-29 07:59:38.024 255071 DEBUG nova.compute.manager [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:59:38 compute-0 nova_compute[255040]: 2025-11-29 07:59:38.024 255071 DEBUG nova.network.neutron [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:59:38 compute-0 ceph-mon[75237]: pgmap v1110: 305 pgs: 305 active+clean; 109 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 07:59:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/30289384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/30289384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_07:59:38
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Nov 29 07:59:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:59:38 compute-0 podman[264099]: 2025-11-29 07:59:38.970131802 +0000 UTC m=+0.133515987 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.074 255071 DEBUG nova.network.neutron [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.093 255071 INFO nova.compute.manager [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] Took 1.07 seconds to deallocate network for instance.
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.135 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.136 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.210 255071 DEBUG oslo_concurrency.processutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.314 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 65 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 2.5 MiB/s wr, 162 op/s
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.451 255071 DEBUG nova.compute.manager [req-93f57330-0591-455a-bc25-90e5ced9bf90 req-da1fe9cd-5e86-42aa-85c0-b0691b033cd5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f663740c-6ef5-4e28-9746-851907470acd] Received event network-vif-deleted-2882a412-8149-42bc-be44-538ca28e3f31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 29 07:59:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 07:59:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 29 07:59:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/30289384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/30289384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914625698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.666 255071 DEBUG oslo_concurrency.processutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
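processutils.py:384/422 bracket nova's RBD storage check shelling out to the ceph CLI and getting rc 0 back in 0.456s. The equivalent call, assuming the same CLI, keyring, and ceph.conf are available:

    # Equivalent of the logged subprocess call; assumes the ceph CLI and
    # the client.openstack credentials referenced by ceph.conf exist.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    avail = json.loads(out)['stats']['total_avail_bytes']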
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.674 255071 DEBUG nova.compute.provider_tree [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.694 255071 DEBUG nova.scheduler.client.report [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
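Placement treats the inventory above as unchanged, so no update is sent. Usable capacity per resource class follows (total - reserved) * allocation_ratio; worked out for the values in the log:

    # Capacity as placement derives it: (total - reserved) * allocation_ratio.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2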
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.712 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.737 255071 INFO nova.scheduler.client.report [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Deleted allocations for instance f663740c-6ef5-4e28-9746-851907470acd
Nov 29 07:59:39 compute-0 nova_compute[255040]: 2025-11-29 07:59:39.821 255071 DEBUG oslo_concurrency.lockutils [None req-910e1617-256f-44b2-af63-e808960d6a16 a05ba0abba70499fbc58e9840c97b6d3 c301766e23b54f51bc3ecc646fdabab5 - - default default] Lock "f663740c-6ef5-4e28-9746-851907470acd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:40 compute-0 nova_compute[255040]: 2025-11-29 07:59:40.272 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:40 compute-0 ceph-mon[75237]: pgmap v1111: 305 pgs: 305 active+clean; 65 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 2.5 MiB/s wr, 162 op/s
Nov 29 07:59:40 compute-0 ceph-mon[75237]: osdmap e161: 3 total, 3 up, 3 in
Nov 29 07:59:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3914625698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.2 MiB/s wr, 177 op/s
Nov 29 07:59:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/431076059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/431076059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:42 compute-0 ceph-mon[75237]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.2 MiB/s wr, 177 op/s
Nov 29 07:59:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/431076059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/431076059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:59:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:59:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1404370540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1404370540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 1.1 MiB/s wr, 139 op/s
Nov 29 07:59:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1404370540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1404370540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:44 compute-0 nova_compute[255040]: 2025-11-29 07:59:44.317 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:44 compute-0 ceph-mon[75237]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 1.1 MiB/s wr, 139 op/s
Nov 29 07:59:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:45 compute-0 nova_compute[255040]: 2025-11-29 07:59:45.276 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 18 KiB/s wr, 123 op/s
Nov 29 07:59:45 compute-0 sudo[264149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:45 compute-0 sudo[264149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:45 compute-0 sudo[264149]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:46 compute-0 sudo[264174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 sudo[264174]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:46 compute-0 sudo[264199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 sudo[264199]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:59:46 compute-0 sudo[264224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 ceph-mon[75237]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 18 KiB/s wr, 123 op/s
Nov 29 07:59:46 compute-0 sudo[264224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2185099922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2185099922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:46 compute-0 sudo[264279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:46 compute-0 sudo[264279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 sudo[264279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:46 compute-0 sudo[264304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 sudo[264304]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:46 compute-0 sudo[264329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:46 compute-0 sudo[264329]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:46 compute-0 sudo[264354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:59:46 compute-0 sudo[264354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.400335067 +0000 UTC m=+0.051868103 container create 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 18 KiB/s wr, 124 op/s
Nov 29 07:59:47 compute-0 systemd[1]: Started libpod-conmon-085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562.scope.
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.374988422 +0000 UTC m=+0.026521488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.512082295 +0000 UTC m=+0.163615351 container init 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.522451755 +0000 UTC m=+0.173984801 container start 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.52671434 +0000 UTC m=+0.178247506 container attach 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:59:47 compute-0 exciting_jang[264434]: 167 167
Nov 29 07:59:47 compute-0 systemd[1]: libpod-085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562.scope: Deactivated successfully.
Nov 29 07:59:47 compute-0 conmon[264434]: conmon 085983b9b3f8db84d618 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562.scope/container/memory.events
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.532863337 +0000 UTC m=+0.184396373 container died 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:59:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2185099922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2185099922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf98757c3cfa7bf9bde67faad7c48a8d931d0f9bc3645fe6747ea8214abc5208-merged.mount: Deactivated successfully.
Nov 29 07:59:47 compute-0 podman[264419]: 2025-11-29 07:59:47.589104926 +0000 UTC m=+0.240637962 container remove 085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:59:47 compute-0 systemd[1]: libpod-conmon-085983b9b3f8db84d6180cfd32928cb6ef758078a5755c8e750b4900361da562.scope: Deactivated successfully.
Nov 29 07:59:47 compute-0 podman[264458]: 2025-11-29 07:59:47.76953894 +0000 UTC m=+0.047896455 container create 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:59:47 compute-0 systemd[1]: Started libpod-conmon-778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9.scope.
Nov 29 07:59:47 compute-0 podman[264458]: 2025-11-29 07:59:47.749449997 +0000 UTC m=+0.027807532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4709616cea501180a01543ed6d0c16b983220c58feaac6e92de11d2a314d310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4709616cea501180a01543ed6d0c16b983220c58feaac6e92de11d2a314d310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4709616cea501180a01543ed6d0c16b983220c58feaac6e92de11d2a314d310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4709616cea501180a01543ed6d0c16b983220c58feaac6e92de11d2a314d310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:47 compute-0 podman[264458]: 2025-11-29 07:59:47.865191974 +0000 UTC m=+0.143549539 container init 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:59:47 compute-0 podman[264458]: 2025-11-29 07:59:47.880402205 +0000 UTC m=+0.158759720 container start 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:59:47 compute-0 podman[264458]: 2025-11-29 07:59:47.884705691 +0000 UTC m=+0.163063236 container attach 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 07:59:48 compute-0 ceph-mon[75237]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 18 KiB/s wr, 124 op/s
Nov 29 07:59:48 compute-0 podman[264488]: 2025-11-29 07:59:48.910754027 +0000 UTC m=+0.072907580 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:59:49 compute-0 nova_compute[255040]: 2025-11-29 07:59:49.319 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:49 compute-0 funny_ride[264474]: [
Nov 29 07:59:49 compute-0 funny_ride[264474]:     {
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "available": false,
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "ceph_device": false,
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "lsm_data": {},
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "lvs": [],
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "path": "/dev/sr0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "rejected_reasons": [
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "Insufficient space (<5GB)",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "Has a FileSystem"
Nov 29 07:59:49 compute-0 funny_ride[264474]:         ],
Nov 29 07:59:49 compute-0 funny_ride[264474]:         "sys_api": {
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "actuators": null,
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "device_nodes": "sr0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "devname": "sr0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "human_readable_size": "482.00 KB",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "id_bus": "ata",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "model": "QEMU DVD-ROM",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "nr_requests": "2",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "parent": "/dev/sr0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "partitions": {},
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "path": "/dev/sr0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "removable": "1",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "rev": "2.5+",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "ro": "0",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "rotational": "1",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "sas_address": "",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "sas_device_handle": "",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "scheduler_mode": "mq-deadline",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "sectors": 0,
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "sectorsize": "2048",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "size": 493568.0,
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "support_discard": "2048",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "type": "disk",
Nov 29 07:59:49 compute-0 funny_ride[264474]:             "vendor": "QEMU"
Nov 29 07:59:49 compute-0 funny_ride[264474]:         }
Nov 29 07:59:49 compute-0 funny_ride[264474]:     }
Nov 29 07:59:49 compute-0 funny_ride[264474]: ]
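The funny_ride container is the ceph-volume inventory run launched by the sudo/cephadm command above; its JSON verdict is why /dev/sr0 is never considered for an OSD. A short filter over output like this, assuming it has been captured to inventory.json (the filename is an assumption):

    # Filter a captured `ceph-volume inventory --format=json-pretty` report.
    import json

    with open('inventory.json') as f:   # captured copy of the output above
        devices = json.load(f)
    for dev in devices:
        if dev['available']:
            print('usable:', dev['path'])
        else:
            print('rejected:', dev['path'], '->',
                  '; '.join(dev['rejected_reasons']))
    # Here: rejected: /dev/sr0 -> Insufficient space (<5GB); Has a FileSystem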
Nov 29 07:59:49 compute-0 systemd[1]: libpod-778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9.scope: Deactivated successfully.
Nov 29 07:59:49 compute-0 systemd[1]: libpod-778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9.scope: Consumed 1.561s CPU time.
Nov 29 07:59:49 compute-0 podman[264458]: 2025-11-29 07:59:49.389794897 +0000 UTC m=+1.668152412 container died 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4709616cea501180a01543ed6d0c16b983220c58feaac6e92de11d2a314d310-merged.mount: Deactivated successfully.
Nov 29 07:59:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 07:59:49 compute-0 podman[264458]: 2025-11-29 07:59:49.454996769 +0000 UTC m=+1.733354284 container remove 778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:59:49 compute-0 systemd[1]: libpod-conmon-778c7aefc2ff610c674b706a92934c35b7b78d97f43aae0575a566ad437c52b9.scope: Deactivated successfully.
Nov 29 07:59:49 compute-0 sudo[264354]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f18db90a-b3a5-4fb6-a690-f8a1a7e962cb does not exist
Nov 29 07:59:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 74e51c40-d543-4d90-aae4-dc4750a79c76 does not exist
Nov 29 07:59:49 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 3f3d0ef0-9fa9-40b2-916f-d58adb4f9ffa does not exist
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:49 compute-0 sudo[266333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:49 compute-0 sudo[266333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:49 compute-0 sudo[266333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:49 compute-0 sudo[266358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:49 compute-0 sudo[266358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:49 compute-0 sudo[266358]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:49 compute-0 sudo[266383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:49 compute-0 sudo[266383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:49 compute-0 sudo[266383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 29 07:59:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 07:59:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 29 07:59:49 compute-0 sudo[266408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:59:49 compute-0 sudo[266408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705962679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705962679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.210299751 +0000 UTC m=+0.044535984 container create 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:59:50 compute-0 nova_compute[255040]: 2025-11-29 07:59:50.246 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403175.2442296, f663740c-6ef5-4e28-9746-851907470acd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:50 compute-0 nova_compute[255040]: 2025-11-29 07:59:50.247 255071 INFO nova.compute.manager [-] [instance: f663740c-6ef5-4e28-9746-851907470acd] VM Stopped (Lifecycle Event)
Nov 29 07:59:50 compute-0 systemd[1]: Started libpod-conmon-41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41.scope.
Nov 29 07:59:50 compute-0 nova_compute[255040]: 2025-11-29 07:59:50.276 255071 DEBUG nova.compute.manager [None req-9489b176-d93c-4b52-b4a9-cb5e728c03cd - - - - - -] [instance: f663740c-6ef5-4e28-9746-851907470acd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:50 compute-0 nova_compute[255040]: 2025-11-29 07:59:50.279 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.191434312 +0000 UTC m=+0.025670565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.328904536 +0000 UTC m=+0.163140799 container init 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.338037702 +0000 UTC m=+0.172273935 container start 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:59:50 compute-0 determined_dijkstra[266488]: 167 167
Nov 29 07:59:50 compute-0 systemd[1]: libpod-41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41.scope: Deactivated successfully.
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.354131147 +0000 UTC m=+0.188367400 container attach 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.355889104 +0000 UTC m=+0.190125347 container died 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-00e46562ec5eb1573db3498e94ce10eda67e5054b4acfb52d17301e471349577-merged.mount: Deactivated successfully.
Nov 29 07:59:50 compute-0 podman[266472]: 2025-11-29 07:59:50.401702402 +0000 UTC m=+0.235938635 container remove 41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:59:50 compute-0 systemd[1]: libpod-conmon-41d9d236e9fb733ed48797965ef49f44773e1614ec6ac2edb76c9eac45637a41.scope: Deactivated successfully.
Nov 29 07:59:50 compute-0 ceph-mon[75237]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: osdmap e162: 3 total, 3 up, 3 in
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3705962679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3705962679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:50 compute-0 podman[266515]: 2025-11-29 07:59:50.602476285 +0000 UTC m=+0.057949386 container create a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:59:50 compute-0 systemd[1]: Started libpod-conmon-a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d.scope.
Nov 29 07:59:50 compute-0 podman[266515]: 2025-11-29 07:59:50.578402195 +0000 UTC m=+0.033875276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:50 compute-0 podman[266515]: 2025-11-29 07:59:50.717590585 +0000 UTC m=+0.173063676 container init a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:59:50 compute-0 podman[266515]: 2025-11-29 07:59:50.728667004 +0000 UTC m=+0.184140075 container start a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:50 compute-0 podman[266515]: 2025-11-29 07:59:50.732898938 +0000 UTC m=+0.188372019 container attach a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:59:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Nov 29 07:59:51 compute-0 ceph-mon[75237]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Nov 29 07:59:51 compute-0 elated_chatterjee[266531]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:59:51 compute-0 elated_chatterjee[266531]: --> relative data size: 1.0
Nov 29 07:59:51 compute-0 elated_chatterjee[266531]: --> All data devices are unavailable
Nov 29 07:59:51 compute-0 systemd[1]: libpod-a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d.scope: Deactivated successfully.
Nov 29 07:59:51 compute-0 systemd[1]: libpod-a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d.scope: Consumed 1.106s CPU time.
Nov 29 07:59:51 compute-0 podman[266515]: 2025-11-29 07:59:51.886741867 +0000 UTC m=+1.342214968 container died a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c116c492d7df971fa67c5fda1ce6fa9e7bd5834c5d312beb790a0aecd2edb1b-merged.mount: Deactivated successfully.
Nov 29 07:59:52 compute-0 podman[266515]: 2025-11-29 07:59:52.000627134 +0000 UTC m=+1.456100195 container remove a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:59:52 compute-0 systemd[1]: libpod-conmon-a78ad2ed5a260f9a03ae570c3e1996f3414c5815d121ab9c73f8dc21cf7ff69d.scope: Deactivated successfully.
Nov 29 07:59:52 compute-0 sudo[266408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:52 compute-0 sudo[266574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:52 compute-0 sudo[266574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:52 compute-0 sudo[266574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:52 compute-0 sudo[266599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:52 compute-0 sudo[266599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:52 compute-0 sudo[266599]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:52 compute-0 sudo[266624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:52 compute-0 sudo[266624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:52 compute-0 sudo[266624]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:52 compute-0 sudo[266649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 07:59:52 compute-0 sudo[266649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.734487997 +0000 UTC m=+0.047658368 container create efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:59:52 compute-0 systemd[1]: Started libpod-conmon-efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285.scope.
Nov 29 07:59:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.713451209 +0000 UTC m=+0.026621600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.813934263 +0000 UTC m=+0.127104654 container init efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:59:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.82307051 +0000 UTC m=+0.136240911 container start efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 07:59:52 compute-0 vigilant_dijkstra[266731]: 167 167
Nov 29 07:59:52 compute-0 systemd[1]: libpod-efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285.scope: Deactivated successfully.
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.827580521 +0000 UTC m=+0.140750942 container attach efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.82825114 +0000 UTC m=+0.141421551 container died efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 29 07:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-68859360e8d8d12fa3e491748b0394f5222927de0f09fb513b805832902b3ba4-merged.mount: Deactivated successfully.
Nov 29 07:59:52 compute-0 podman[266715]: 2025-11-29 07:59:52.874841578 +0000 UTC m=+0.188011949 container remove efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:52 compute-0 systemd[1]: libpod-conmon-efde7ab93949be1e0f6bffc58a27310f9cb274d63cf68e07e0ee27c2e8ff5285.scope: Deactivated successfully.
Nov 29 07:59:53 compute-0 podman[266756]: 2025-11-29 07:59:53.086425793 +0000 UTC m=+0.051078690 container create afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:53 compute-0 systemd[1]: Started libpod-conmon-afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7.scope.
Nov 29 07:59:53 compute-0 podman[266756]: 2025-11-29 07:59:53.063860624 +0000 UTC m=+0.028513511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac24d4911f87897f02b88521f8cf6976ec8072387a686f782b4b5c535a94a95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac24d4911f87897f02b88521f8cf6976ec8072387a686f782b4b5c535a94a95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac24d4911f87897f02b88521f8cf6976ec8072387a686f782b4b5c535a94a95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac24d4911f87897f02b88521f8cf6976ec8072387a686f782b4b5c535a94a95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:53 compute-0 podman[266756]: 2025-11-29 07:59:53.193499246 +0000 UTC m=+0.158152203 container init afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:53 compute-0 podman[266756]: 2025-11-29 07:59:53.201565353 +0000 UTC m=+0.166218260 container start afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:53 compute-0 podman[266756]: 2025-11-29 07:59:53.206370664 +0000 UTC m=+0.171023531 container attach afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:59:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 52 op/s
Nov 29 07:59:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164093148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164093148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 29 07:59:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 07:59:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 29 07:59:53 compute-0 ceph-mon[75237]: osdmap e163: 3 total, 3 up, 3 in
Nov 29 07:59:53 compute-0 ceph-mon[75237]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 52 op/s
Nov 29 07:59:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4164093148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4164093148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]: {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     "0": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "devices": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "/dev/loop3"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             ],
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_name": "ceph_lv0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_size": "21470642176",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "name": "ceph_lv0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "tags": {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.crush_device_class": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.encrypted": "0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_id": "0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.vdo": "0"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             },
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "vg_name": "ceph_vg0"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         }
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     ],
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     "1": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "devices": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "/dev/loop4"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             ],
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_name": "ceph_lv1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_size": "21470642176",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "name": "ceph_lv1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "tags": {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.crush_device_class": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.encrypted": "0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_id": "1",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.vdo": "0"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             },
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "vg_name": "ceph_vg1"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         }
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     ],
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     "2": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "devices": [
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "/dev/loop5"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             ],
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_name": "ceph_lv2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_size": "21470642176",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "name": "ceph_lv2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "tags": {
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.crush_device_class": "",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.encrypted": "0",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osd_id": "2",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:                 "ceph.vdo": "0"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             },
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "type": "block",
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:             "vg_name": "ceph_vg2"
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:         }
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]:     ]
Nov 29 07:59:54 compute-0 hungry_pasteur[266772]: }
Nov 29 07:59:54 compute-0 systemd[1]: libpod-afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7.scope: Deactivated successfully.
Nov 29 07:59:54 compute-0 podman[266756]: 2025-11-29 07:59:54.142418468 +0000 UTC m=+1.107071345 container died afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:59:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac24d4911f87897f02b88521f8cf6976ec8072387a686f782b4b5c535a94a95a-merged.mount: Deactivated successfully.
Nov 29 07:59:54 compute-0 podman[266756]: 2025-11-29 07:59:54.208052541 +0000 UTC m=+1.172705408 container remove afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:59:54 compute-0 systemd[1]: libpod-conmon-afb08b260581c05b6b587d7c1da9cc462c355fcf6a21714ba8a0685b6ee43dd7.scope: Deactivated successfully.
Nov 29 07:59:54 compute-0 sudo[266649]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:54 compute-0 nova_compute[255040]: 2025-11-29 07:59:54.322 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:54 compute-0 sudo[266792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:54 compute-0 sudo[266792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:54 compute-0 sudo[266792]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:54 compute-0 sudo[266817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:54 compute-0 sudo[266817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:54 compute-0 sudo[266817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:54 compute-0 sudo[266842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:54 compute-0 sudo[266842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:54 compute-0 sudo[266842]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:54 compute-0 sudo[266867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 07:59:54 compute-0 sudo[266867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:54 compute-0 ceph-mon[75237]: osdmap e164: 3 total, 3 up, 3 in
Nov 29 07:59:54 compute-0 podman[266932]: 2025-11-29 07:59:54.954704101 +0000 UTC m=+0.070035382 container create fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:55 compute-0 systemd[1]: Started libpod-conmon-fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e.scope.
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:54.913371934 +0000 UTC m=+0.028703245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:55.084017984 +0000 UTC m=+0.199349285 container init fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:55.09384278 +0000 UTC m=+0.209174081 container start fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:55.098931927 +0000 UTC m=+0.214263208 container attach fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:59:55 compute-0 systemd[1]: libpod-fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e.scope: Deactivated successfully.
Nov 29 07:59:55 compute-0 eager_fermat[266948]: 167 167
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:55.104119597 +0000 UTC m=+0.219450898 container died fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:59:55 compute-0 conmon[266948]: conmon fa3e8d4a539c3095665a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e.scope/container/memory.events
Nov 29 07:59:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1272dd3a7141f1beb865f406ee91621716270b3b76d36f147a5ddea8213188da-merged.mount: Deactivated successfully.
Nov 29 07:59:55 compute-0 podman[266932]: 2025-11-29 07:59:55.194826797 +0000 UTC m=+0.310158098 container remove fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:55 compute-0 podman[266950]: 2025-11-29 07:59:55.196972246 +0000 UTC m=+0.168574696 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
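
[Note] The multipathd health_status line above carries the whole EDPM/Kolla config_data blob inline. The payload is a Python literal (single quotes, True), not JSON, so json.loads rejects it; ast.literal_eval decodes it. A small parsing sketch, assuming the journal line is available as a string:

    import ast

    def parse_config_data(journal_line: str) -> dict:
        # config_data={...} is a Python-style literal, so ast.literal_eval is
        # the safe decoder; json.loads fails on the single quotes and True.
        # The brace matcher below ignores braces inside strings, which is
        # fine for this payload (no value contains "{" or "}").
        start = journal_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(journal_line[start:], start):
            depth += {"{": 1, "}": -1}.get(ch, 0)
            if depth == 0 and ch == "}":
                return ast.literal_eval(journal_line[start:i + 1])
        raise ValueError("unbalanced config_data")

    # cfg = parse_config_data(line)
    # cfg["healthcheck"]["test"] -> '/openstack/healthcheck'
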
Nov 29 07:59:55 compute-0 systemd[1]: libpod-conmon-fa3e8d4a539c3095665a48e023af8aece81c1671cdb791fa801a59fe77581d2e.scope: Deactivated successfully.
Nov 29 07:59:55 compute-0 nova_compute[255040]: 2025-11-29 07:59:55.281 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:55 compute-0 podman[266990]: 2025-11-29 07:59:55.390697078 +0000 UTC m=+0.054919884 container create ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.8 KiB/s wr, 87 op/s
Nov 29 07:59:55 compute-0 podman[266990]: 2025-11-29 07:59:55.366843474 +0000 UTC m=+0.031066200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:55 compute-0 systemd[1]: Started libpod-conmon-ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126.scope.
Nov 29 07:59:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258fb1eba358d73c1627ee021b650ea74c00bd69edab049ad25c2be55402db5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258fb1eba358d73c1627ee021b650ea74c00bd69edab049ad25c2be55402db5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258fb1eba358d73c1627ee021b650ea74c00bd69edab049ad25c2be55402db5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258fb1eba358d73c1627ee021b650ea74c00bd69edab049ad25c2be55402db5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
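
[Note] The four xfs lines are the kernel warning that these overlay mounts lack the xfs bigtime feature, so inode timestamps saturate at 0x7fffffff, the signed 32-bit time_t ceiling. The cutoff in human terms:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted by the kernel in the lines above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
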
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
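
[Note] The pg_autoscaler lines above all follow one formula: a pool's raw pg target is its share of raw capacity times its bias times a cluster-wide PG budget, then quantized to a power of two no lower than the pool's floor. The budget works out to 300 here (for example, 'images': 0.000665858301588852 x 1.0 x 300 = 0.19975749047665559, and 'cephfs.cephfs.meta': 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635). A sketch of the arithmetic, with the caveat that 300 is inferred from the ratios (3 OSDs times the default mon_target_pg_per_osd of 100), not printed in the log:

    # Reproduces the pg_autoscaler numbers above. PG_BUDGET = 300 is inferred;
    # the per-pool floor (32, or 16 for cephfs.cephfs.meta) is read off the
    # "current" values in the log.
    PG_BUDGET = 300

    def pg_target(used_ratio: float, bias: float, floor: int) -> int:
        # Quantize the raw target to a power of two, never below the floor.
        # (The real autoscaler adds hysteresis before acting on this number.)
        raw = used_ratio * bias * PG_BUDGET
        pgs = floor
        while pgs * 2 <= raw:
            pgs *= 2
        return pgs

    assert abs(0.000665858301588852 * 1.0 * PG_BUDGET
               - 0.19975749047665559) < 1e-12          # pool 'images'
    print(pg_target(0.000665858301588852, 1.0, 32))    # images -> 32
    print(pg_target(5.087256625643029e-07, 4.0, 16))   # cephfs.cephfs.meta -> 16
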
Nov 29 07:59:56 compute-0 podman[266990]: 2025-11-29 07:59:56.128372196 +0000 UTC m=+0.792594992 container init ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:59:56 compute-0 podman[266990]: 2025-11-29 07:59:56.137976585 +0000 UTC m=+0.802199331 container start ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:59:56 compute-0 podman[266990]: 2025-11-29 07:59:56.427236239 +0000 UTC m=+1.091458965 container attach ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:59:56 compute-0 ceph-mon[75237]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.8 KiB/s wr, 87 op/s
Nov 29 07:59:57 compute-0 priceless_faraday[267007]: {
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_id": 2,
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "type": "bluestore"
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     },
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_id": 0,
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "type": "bluestore"
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     },
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_id": 1,
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:         "type": "bluestore"
Nov 29 07:59:57 compute-0 priceless_faraday[267007]:     }
Nov 29 07:59:57 compute-0 priceless_faraday[267007]: }
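
[Note] The JSON priceless_faraday just printed is the raw-list result the mgr requested at 07:59:54: three bluestore OSDs (ids 0-2), one per ceph_vg*/ceph_lv* logical volume, all under the same ceph_fsid. Turning it into an osd_id -> device map, reusing the raw_list() sketch from the earlier note:

    # raw_list() is the sketch defined above; keyed by osd_uuid in the output.
    devices = {entry["osd_id"]: entry["device"] for entry in raw_list().values()}
    # -> {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #     0: '/dev/mapper/ceph_vg0-ceph_lv0',
    #     1: '/dev/mapper/ceph_vg1-ceph_lv1'}
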
Nov 29 07:59:57 compute-0 systemd[1]: libpod-ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126.scope: Deactivated successfully.
Nov 29 07:59:57 compute-0 podman[266990]: 2025-11-29 07:59:57.145538572 +0000 UTC m=+1.809761288 container died ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:59:57 compute-0 systemd[1]: libpod-ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126.scope: Consumed 1.015s CPU time.
Nov 29 07:59:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1258fb1eba358d73c1627ee021b650ea74c00bd69edab049ad25c2be55402db5-merged.mount: Deactivated successfully.
Nov 29 07:59:57 compute-0 podman[266990]: 2025-11-29 07:59:57.204984108 +0000 UTC m=+1.869206814 container remove ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:57 compute-0 systemd[1]: libpod-conmon-ec20f4e138ed516abd97279b961c9ef3e540d10e82b62e87ac6511dc12c74126.scope: Deactivated successfully.
Nov 29 07:59:57 compute-0 sudo[266867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:59:57 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:59:57 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:57 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 764a742b-8f47-41b4-a5f3-1ca40e0d69ff does not exist
Nov 29 07:59:57 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6422f715-5e58-4201-b7d0-86e97e7d9ffd does not exist
Nov 29 07:59:57 compute-0 sudo[267052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:57 compute-0 sudo[267052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:57 compute-0 sudo[267052]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206161897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206161897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
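
[Note] The paired df / "osd pool get-quota" commands from client.openstack (192.168.122.10, recurring every few seconds under fresh client nonces) look like OpenStack's RBD drivers polling pool capacity; that attribution is an inference, the audit log only shows the entity. The same two mon commands issued directly, assuming the python3-rados bindings and a local client.openstack keyring:

    import json
    import rados

    # Entity name and pool are taken from the audit lines above.
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        _, df_out, _ = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        _, quota_out, _ = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        print(json.loads(df_out))
        print(json.loads(quota_out))
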
Nov 29 07:59:57 compute-0 sudo[267077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:59:57 compute-0 sudo[267077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:57 compute-0 sudo[267077]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 69 op/s
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 07:59:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 07:59:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 07:59:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 07:59:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4206161897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4206161897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 69 op/s
Nov 29 07:59:58 compute-0 ceph-mon[75237]: osdmap e165: 3 total, 3 up, 3 in
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928273620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928273620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4227778457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4227778457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737310812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737310812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 07:59:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 07:59:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2928273620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2928273620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4227778457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4227778457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1737310812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1737310812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: osdmap e166: 3 total, 3 up, 3 in
Nov 29 07:59:59 compute-0 nova_compute[255040]: 2025-11-29 07:59:59.324 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 7.7 KiB/s wr, 170 op/s
Nov 29 07:59:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456540200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2456540200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:00 compute-0 nova_compute[255040]: 2025-11-29 08:00:00.285 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:00 compute-0 ceph-mon[75237]: pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 7.7 KiB/s wr, 170 op/s
Nov 29 08:00:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2456540200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2456540200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:01 compute-0 nova_compute[255040]: 2025-11-29 08:00:01.093 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.2 KiB/s wr, 136 op/s
Nov 29 08:00:02 compute-0 ceph-mon[75237]: pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 6.2 KiB/s wr, 136 op/s
Nov 29 08:00:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.9 KiB/s wr, 128 op/s
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747395798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747395798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:04 compute-0 nova_compute[255040]: 2025-11-29 08:00:04.325 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:04 compute-0 ceph-mon[75237]: pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.9 KiB/s wr, 128 op/s
Nov 29 08:00:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3747395798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3747395798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
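
[Note] The recurring _set_new_cache_sizes line is the mon's cache autotuner republishing its split. In MiB the logged figures are roughly 973 total, 332 each for the incremental and full osdmap caches, and 304 for the RocksDB (kv) cache; how the total derives from mon_memory_target is not shown in the log. The conversion itself:

    MiB = 1024 * 1024
    sizes = {"cache_size": 1020054731, "inc_alloc": 348127232,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, nbytes in sizes.items():
        print(f"{name:>10}: {nbytes / MiB:6.0f} MiB "
              f"({nbytes / sizes['cache_size']:.1%} of cache_size)")
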
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 08:00:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3925660884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3925660884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:05 compute-0 nova_compute[255040]: 2025-11-29 08:00:05.288 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.2 KiB/s wr, 159 op/s
Nov 29 08:00:05 compute-0 ceph-mon[75237]: osdmap e167: 3 total, 3 up, 3 in
Nov 29 08:00:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3925660884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3925660884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:05 compute-0 ceph-mon[75237]: pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 6.2 KiB/s wr, 159 op/s
Nov 29 08:00:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.0 KiB/s wr, 107 op/s
Nov 29 08:00:08 compute-0 ceph-mon[75237]: pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.0 KiB/s wr, 107 op/s
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:09 compute-0 nova_compute[255040]: 2025-11-29 08:00:09.328 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Nov 29 08:00:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:09 compute-0 podman[267102]: 2025-11-29 08:00:09.954466927 +0000 UTC m=+0.113952620 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 08:00:10 compute-0 nova_compute[255040]: 2025-11-29 08:00:10.290 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 08:00:10 compute-0 ceph-mon[75237]: pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Nov 29 08:00:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 08:00:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 895 B/s wr, 48 op/s
Nov 29 08:00:11 compute-0 ceph-mon[75237]: osdmap e168: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 08:00:12 compute-0 ceph-mon[75237]: pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 895 B/s wr, 48 op/s
Nov 29 08:00:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 08:00:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 08:00:13 compute-0 ceph-mon[75237]: osdmap e169: 3 total, 3 up, 3 in
Nov 29 08:00:14 compute-0 nova_compute[255040]: 2025-11-29 08:00:14.329 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2046826851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2046826851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75237]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 08:00:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2046826851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2046826851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:15 compute-0 nova_compute[255040]: 2025-11-29 08:00:15.329 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Nov 29 08:00:16 compute-0 ceph-mon[75237]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 54 op/s
Nov 29 08:00:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625490360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.9 KiB/s wr, 29 op/s
Nov 29 08:00:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215258605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215258605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2625490360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2215258605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2215258605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 29 08:00:18 compute-0 ceph-mon[75237]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.9 KiB/s wr, 29 op/s
Nov 29 08:00:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 08:00:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 nova_compute[255040]: 2025-11-29 08:00:19.332 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.9 KiB/s wr, 64 op/s
Nov 29 08:00:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 29 08:00:19 compute-0 ceph-mon[75237]: osdmap e170: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 29 08:00:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 29 08:00:19 compute-0 podman[267129]: 2025-11-29 08:00:19.904012533 +0000 UTC m=+0.063362104 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:00:19 compute-0 nova_compute[255040]: 2025-11-29 08:00:19.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:19 compute-0 nova_compute[255040]: 2025-11-29 08:00:19.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:20 compute-0 nova_compute[255040]: 2025-11-29 08:00:20.330 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 ceph-mon[75237]: pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.9 KiB/s wr, 64 op/s
Nov 29 08:00:20 compute-0 ceph-mon[75237]: osdmap e171: 3 total, 3 up, 3 in
Nov 29 08:00:20 compute-0 ceph-mon[75237]: osdmap e172: 3 total, 3 up, 3 in
Nov 29 08:00:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 29 08:00:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 08:00:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.864853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220865389, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1015, "num_deletes": 255, "total_data_size": 1309137, "memory_usage": 1328848, "flush_reason": "Manual Compaction"}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220884905, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1283220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20637, "largest_seqno": 21651, "table_properties": {"data_size": 1278091, "index_size": 2589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11723, "raw_average_key_size": 20, "raw_value_size": 1267587, "raw_average_value_size": 2216, "num_data_blocks": 115, "num_entries": 572, "num_filter_entries": 572, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403155, "oldest_key_time": 1764403155, "file_creation_time": 1764403220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19989 microseconds, and 10993 cpu microseconds.
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.884966) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1283220 bytes OK
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.884995) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.887701) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.887719) EVENT_LOG_v1 {"time_micros": 1764403220887712, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.887749) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1304153, prev total WAL file size 1304153, number of live WAL files 2.
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.889171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1253KB)], [47(8625KB)]
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220889523, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10115981, "oldest_snapshot_seqno": -1}
Nov 29 08:00:20 compute-0 nova_compute[255040]: 2025-11-29 08:00:20.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:20 compute-0 nova_compute[255040]: 2025-11-29 08:00:20.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4765 keys, 8350774 bytes, temperature: kUnknown
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220982916, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8350774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8316731, "index_size": 21005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 118949, "raw_average_key_size": 24, "raw_value_size": 8228464, "raw_average_value_size": 1726, "num_data_blocks": 869, "num_entries": 4765, "num_filter_entries": 4765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.983274) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8350774 bytes
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.985040) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 89.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(14.4) write-amplify(6.5) OK, records in: 5288, records dropped: 523 output_compression: NoCompression
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.985058) EVENT_LOG_v1 {"time_micros": 1764403220985050, "job": 24, "event": "compaction_finished", "compaction_time_micros": 93475, "compaction_time_cpu_micros": 33499, "output_level": 6, "num_output_files": 1, "total_output_size": 8350774, "num_input_records": 5288, "num_output_records": 4765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220985448, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403220986876, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.888800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.986992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.987010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.987013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.987016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:00:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:00:20.987019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
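The compaction job above is fully described by its own numbers: JOB 24 read one 1253 KB file from L0 plus one 8625 KB file from L6 and wrote a single 8350774-byte file back to L6, which is exactly where the logged write-amplify(6.5) and read-write-amplify(14.4) come from. A minimal sketch recomputing those ratios from the logged sizes (illustrative arithmetic only, not a RocksDB API):

    # Recompute the amplification ratios logged for [JOB 24] above.
    in_l0 = 1253 * 1024   # input file #49 read from level-0
    in_l6 = 8625 * 1024   # input file #47 read from level-6
    out_l6 = 8350774      # output file #50 written to level-6

    write_amplify = out_l6 / in_l0                         # ~6.5
    read_write_amplify = (in_l0 + in_l6 + out_l6) / in_l0  # ~14.4
    print(f"{write_amplify:.1f} {read_write_amplify:.1f}")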
Nov 29 08:00:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.0 KiB/s wr, 95 op/s
Nov 29 08:00:21 compute-0 ceph-mon[75237]: osdmap e173: 3 total, 3 up, 3 in
Nov 29 08:00:21 compute-0 ceph-mon[75237]: pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.0 KiB/s wr, 95 op/s
Nov 29 08:00:21 compute-0 nova_compute[255040]: 2025-11-29 08:00:21.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:21 compute-0 nova_compute[255040]: 2025-11-29 08:00:21.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.007 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.008 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602343406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.467 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
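The ceph df --format=json subprocess that nova_compute runs here (and again at 08:00:22.782 below) goes through oslo.concurrency's processutils, the module named in the logged source paths. A minimal sketch of the same call, with the client id and conf path copied from the log and error handling omitted:

    import json
    from oslo_concurrency import processutils

    # Same command nova_compute logs at processutils.py:384/422.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # 'stats' holds cluster totals, 'pools' the per-pool usage.
    print(stats['stats']['total_bytes'], len(stats['pools']))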
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.677 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.680 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4746MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.680 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.681 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.758 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.759 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:00:22 compute-0 nova_compute[255040]: 2025-11-29 08:00:22.782 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132498164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:23 compute-0 nova_compute[255040]: 2025-11-29 08:00:23.224 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:23 compute-0 nova_compute[255040]: 2025-11-29 08:00:23.230 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:23 compute-0 nova_compute[255040]: 2025-11-29 08:00:23.249 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:00:23 compute-0 nova_compute[255040]: 2025-11-29 08:00:23.267 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:00:23 compute-0 nova_compute[255040]: 2025-11-29 08:00:23.267 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
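The inventory reported above fixes the schedulable capacity: Placement treats capacity per resource class as (total - reserved) * allocation_ratio, so this host offers 7168 MB of RAM, 32 vCPUs (8 physical at a 4.0 ratio), and 52.2 GB of disk. A quick check using the numbers from the logged inventory:

    # Placement capacity: (total - reserved) * allocation_ratio,
    # using the inventory logged for provider 858d78b2-...fec62e.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2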
Nov 29 08:00:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2602343406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.268 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.269 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.270 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.296 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.296 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.296 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:00:24 compute-0 nova_compute[255040]: 2025-11-29 08:00:24.334 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2132498164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:24 compute-0 ceph-mon[75237]: pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Nov 29 08:00:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:25 compute-0 nova_compute[255040]: 2025-11-29 08:00:25.332 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.7 KiB/s wr, 36 op/s
Nov 29 08:00:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 29 08:00:25 compute-0 ceph-mon[75237]: pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.7 KiB/s wr, 36 op/s
Nov 29 08:00:25 compute-0 podman[267192]: 2025-11-29 08:00:25.94368355 +0000 UTC m=+0.108713547 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
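The health_status=healthy events come from the healthcheck declared in the container's config_data ('test': '/openstack/healthcheck'); podman runs that probe on a timer and records the result. The same probe can be triggered by hand; a sketch via subprocess, with the container name taken from the log:

    import subprocess

    # Run the configured healthcheck once, as the periodic timer does;
    # exit status 0 means the container is healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'],
                            capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else 'unhealthy')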
Nov 29 08:00:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 08:00:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 29 08:00:25 compute-0 nova_compute[255040]: 2025-11-29 08:00:25.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1680771539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 29 08:00:26 compute-0 ceph-mon[75237]: osdmap e174: 3 total, 3 up, 3 in
Nov 29 08:00:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1680771539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 08:00:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 29 08:00:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:27.120 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:27.121 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:27.121 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
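The acquire/waited/held triple logged here (and around "compute_resources" above) is the standard trace emitted by oslo_concurrency.lockutils. A minimal sketch of the decorator form that produces it; the function body is hypothetical:

    from oslo_concurrency import lockutils

    # Callers serialize on the named lock, and lockutils logs the
    # acquire, the time waited, and the time held, as seen above.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # hypothetical body standing in for the real check

    check_child_processes()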
Nov 29 08:00:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.3 KiB/s wr, 18 op/s
Nov 29 08:00:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 29 08:00:28 compute-0 ceph-mon[75237]: osdmap e175: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-0 ceph-mon[75237]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.3 KiB/s wr, 18 op/s
Nov 29 08:00:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 29 08:00:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 29 08:00:29 compute-0 nova_compute[255040]: 2025-11-29 08:00:29.336 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:29 compute-0 ceph-mon[75237]: osdmap e176: 3 total, 3 up, 3 in
Nov 29 08:00:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.2 KiB/s wr, 68 op/s
Nov 29 08:00:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 08:00:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 29 08:00:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 29 08:00:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 08:00:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 29 08:00:30 compute-0 nova_compute[255040]: 2025-11-29 08:00:30.392 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:30 compute-0 ceph-mon[75237]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.2 KiB/s wr, 68 op/s
Nov 29 08:00:30 compute-0 ceph-mon[75237]: osdmap e177: 3 total, 3 up, 3 in
Nov 29 08:00:30 compute-0 ceph-mon[75237]: osdmap e178: 3 total, 3 up, 3 in
Nov 29 08:00:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.1 KiB/s wr, 107 op/s
Nov 29 08:00:33 compute-0 ceph-mon[75237]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.1 KiB/s wr, 107 op/s
Nov 29 08:00:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Nov 29 08:00:34 compute-0 ceph-mon[75237]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Nov 29 08:00:34 compute-0 nova_compute[255040]: 2025-11-29 08:00:34.337 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:35 compute-0 ovn_controller[153295]: 2025-11-29T08:00:35Z|00045|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 29 08:00:35 compute-0 nova_compute[255040]: 2025-11-29 08:00:35.395 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Nov 29 08:00:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 29 08:00:36 compute-0 ceph-mon[75237]: pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Nov 29 08:00:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 08:00:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3632006881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3632006881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246612475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/246612475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.6 KiB/s wr, 36 op/s
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 29 08:00:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 08:00:37 compute-0 ceph-mon[75237]: osdmap e179: 3 total, 3 up, 3 in
Nov 29 08:00:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3632006881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3632006881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/246612475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/246612475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 29 08:00:38 compute-0 ceph-mon[75237]: pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.6 KiB/s wr, 36 op/s
Nov 29 08:00:38 compute-0 ceph-mon[75237]: osdmap e180: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:00:38
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Nov 29 08:00:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
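The balancer pass above ran in upmap mode against a 0.05 max-misplaced budget and prepared 0/10 changes, meaning the 305 PGs are already evenly placed. The same state can be read back from the mgr; a sketch assuming the ceph CLI and an admin keyring are available (output keys assumed from the balancer module):

    import json, subprocess

    # Query the mgr balancer module; mirrors the [balancer INFO root]
    # lines above.
    out = subprocess.check_output(['ceph', 'balancer', 'status',
                                   '--format', 'json'])
    status = json.loads(out)
    print(status.get('mode'), status.get('active'))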
Nov 29 08:00:39 compute-0 nova_compute[255040]: 2025-11-29 08:00:39.339 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 3.5 KiB/s wr, 167 op/s
Nov 29 08:00:39 compute-0 ceph-mon[75237]: osdmap e181: 3 total, 3 up, 3 in
Nov 29 08:00:39 compute-0 ceph-mon[75237]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 3.5 KiB/s wr, 167 op/s
Nov 29 08:00:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 29 08:00:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 08:00:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 29 08:00:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053958677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1053958677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:40 compute-0 nova_compute[255040]: 2025-11-29 08:00:40.398 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:40 compute-0 podman[267214]: 2025-11-29 08:00:40.929824757 +0000 UTC m=+0.099588452 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 08:00:41 compute-0 ceph-mon[75237]: osdmap e182: 3 total, 3 up, 3 in
Nov 29 08:00:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1053958677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1053958677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 5.1 KiB/s wr, 303 op/s
Nov 29 08:00:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:42.214 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:42 compute-0 nova_compute[255040]: 2025-11-29 08:00:42.215 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:42.215 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
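The matched SbGlobalUpdateEvent is an ovsdbapp row event: the SB_Global row's nb_cfg moved from 5 to 6, and the agent responds by scheduling its chassis update 8 seconds out (the write itself lands at 08:00:50 below). A sketch of how such an event class is declared with ovsdbapp, mirroring the constructor arguments shown in the match line; the body is illustrative, not neutron's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires on updates to the single SB_Global row."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # Above, old.nb_cfg was 5 and row.nb_cfg is 6.
            print('nb_cfg is now', row.nb_cfg)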
Nov 29 08:00:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 29 08:00:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 08:00:42 compute-0 ceph-mon[75237]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 5.1 KiB/s wr, 303 op/s
Nov 29 08:00:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:00:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:00:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 5.8 KiB/s wr, 300 op/s
Nov 29 08:00:43 compute-0 ceph-mon[75237]: osdmap e183: 3 total, 3 up, 3 in
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1549992506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1549992506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:44 compute-0 nova_compute[255040]: 2025-11-29 08:00:44.341 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 08:00:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 29 08:00:44 compute-0 ceph-mon[75237]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 5.8 KiB/s wr, 300 op/s
Nov 29 08:00:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1549992506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1549992506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 29 08:00:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 08:00:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 29 08:00:45 compute-0 nova_compute[255040]: 2025-11-29 08:00:45.401 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 3.4 KiB/s wr, 175 op/s
Nov 29 08:00:45 compute-0 ceph-mon[75237]: osdmap e184: 3 total, 3 up, 3 in
Nov 29 08:00:45 compute-0 ceph-mon[75237]: osdmap e185: 3 total, 3 up, 3 in
Nov 29 08:00:46 compute-0 ceph-mon[75237]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 3.4 KiB/s wr, 175 op/s
Nov 29 08:00:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.5 KiB/s wr, 77 op/s
Nov 29 08:00:49 compute-0 ceph-mon[75237]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.5 KiB/s wr, 77 op/s
Nov 29 08:00:49 compute-0 nova_compute[255040]: 2025-11-29 08:00:49.343 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.1 KiB/s wr, 79 op/s
Nov 29 08:00:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 29 08:00:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 08:00:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 29 08:00:50 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:00:50.219 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:50 compute-0 ceph-mon[75237]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.1 KiB/s wr, 79 op/s
Nov 29 08:00:50 compute-0 ceph-mon[75237]: osdmap e186: 3 total, 3 up, 3 in
Nov 29 08:00:50 compute-0 nova_compute[255040]: 2025-11-29 08:00:50.404 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:50 compute-0 podman[267240]: 2025-11-29 08:00:50.89520505 +0000 UTC m=+0.053695611 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:00:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 36 op/s
Nov 29 08:00:52 compute-0 ceph-mon[75237]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 36 op/s
Nov 29 08:00:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Nov 29 08:00:54 compute-0 nova_compute[255040]: 2025-11-29 08:00:54.366 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:54 compute-0 ceph-mon[75237]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Nov 29 08:00:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:55 compute-0 nova_compute[255040]: 2025-11-29 08:00:55.453 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
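Each pg_autoscaler line above is the product of three logged factors: with 3 OSDs and the default mon_target_pg_per_osd of 100, the PG budget is 300, and pg target = usage share x bias x 300; the raw target is then quantized (here staying at each pool's current pg_num). Reproducing two of the logged targets:

    # pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds)
    PG_BUDGET = 100 * 3   # 3 OSDs, default target of 100 PGs per OSD

    pools = {
        'images':             (0.000665858301588852, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * PG_BUDGET)
    # images -> ~0.19976 (quantized to 32, as logged)
    # cephfs.cephfs.meta -> ~0.00061047 (quantized to 16, as logged)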
Nov 29 08:00:56 compute-0 ceph-mon[75237]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:00:56 compute-0 podman[267259]: 2025-11-29 08:00:56.917374495 +0000 UTC m=+0.073202658 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:00:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:00:57 compute-0 sudo[267280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:57 compute-0 sudo[267280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:57 compute-0 sudo[267280]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:57 compute-0 sudo[267305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:57 compute-0 sudo[267305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:57 compute-0 sudo[267305]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:57 compute-0 sudo[267330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:57 compute-0 sudo[267330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:57 compute-0 sudo[267330]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:57 compute-0 sudo[267355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:00:57 compute-0 sudo[267355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1839256786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:58 compute-0 sudo[267355]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:58 compute-0 sudo[267402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:58 compute-0 sudo[267402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:58 compute-0 sudo[267402]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:58 compute-0 sudo[267427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:58 compute-0 sudo[267427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:58 compute-0 sudo[267427]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:58 compute-0 sudo[267452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:58 compute-0 sudo[267452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:58 compute-0 sudo[267452]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:58 compute-0 sudo[267477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:00:58 compute-0 sudo[267477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:58 compute-0 ceph-mon[75237]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:00:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1839256786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:58 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2407196160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2407196160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:58 compute-0 sudo[267477]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:58 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 06f456bc-fd2e-453c-a615-7b39a561075e does not exist
Nov 29 08:00:58 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev aef69483-13a3-4554-8e4f-6c5b29298487 does not exist
Nov 29 08:00:58 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 7f0e3795-7a62-4733-b962-d47e0ae9fa95 does not exist
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:00:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:59 compute-0 sudo[267533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:59 compute-0 sudo[267533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:59 compute-0 sudo[267533]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:59 compute-0 sudo[267558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 29 08:00:59 compute-0 sudo[267558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:59 compute-0 sudo[267558]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 08:00:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 29 08:00:59 compute-0 sudo[267583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:59 compute-0 sudo[267583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:59 compute-0 sudo[267583]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:59 compute-0 sudo[267608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:00:59 compute-0 sudo[267608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:59 compute-0 nova_compute[255040]: 2025-11-29 08:00:59.370 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Nov 29 08:00:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338497534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338497534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.669617721 +0000 UTC m=+0.052101219 container create f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:00:59 compute-0 systemd[1]: Started libpod-conmon-f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621.scope.
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2407196160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2407196160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: osdmap e187: 3 total, 3 up, 3 in
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2338497534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2338497534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.646177397 +0000 UTC m=+0.028660915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.775270894 +0000 UTC m=+0.157754402 container init f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.785566293 +0000 UTC m=+0.168049801 container start f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.789348695 +0000 UTC m=+0.171832193 container attach f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 08:00:59 compute-0 ecstatic_chandrasekhar[267690]: 167 167
Nov 29 08:00:59 compute-0 systemd[1]: libpod-f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621.scope: Deactivated successfully.
Nov 29 08:00:59 compute-0 conmon[267690]: conmon f7d51298c80fb461ccb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621.scope/container/memory.events
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.794244977 +0000 UTC m=+0.176728475 container died f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-99bb7f9a8695d7ad64ffb1c298bbba9f99d52f4733cea96fb31530b98b7a756a-merged.mount: Deactivated successfully.
Nov 29 08:00:59 compute-0 podman[267673]: 2025-11-29 08:00:59.842442419 +0000 UTC m=+0.224925917 container remove f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:00:59 compute-0 systemd[1]: libpod-conmon-f7d51298c80fb461ccb5333dd0624eda1b4bada8b5f73544c89633a2ada8f621.scope: Deactivated successfully.
Nov 29 08:01:00 compute-0 podman[267715]: 2025-11-29 08:01:00.034046905 +0000 UTC m=+0.048104210 container create 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:01:00 compute-0 systemd[1]: Started libpod-conmon-1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2.scope.
Nov 29 08:01:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:00 compute-0 podman[267715]: 2025-11-29 08:01:00.012305687 +0000 UTC m=+0.026363022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 29 08:01:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 29 08:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:00 compute-0 podman[267715]: 2025-11-29 08:01:00.148194468 +0000 UTC m=+0.162251793 container init 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:01:00 compute-0 podman[267715]: 2025-11-29 08:01:00.159270767 +0000 UTC m=+0.173328072 container start 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 08:01:00 compute-0 podman[267715]: 2025-11-29 08:01:00.165213428 +0000 UTC m=+0.179270753 container attach 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:01:00 compute-0 nova_compute[255040]: 2025-11-29 08:01:00.456 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:00 compute-0 ceph-mon[75237]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Nov 29 08:01:00 compute-0 ceph-mon[75237]: osdmap e188: 3 total, 3 up, 3 in
Nov 29 08:01:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324665037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324665037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:01 compute-0 CROND[267754]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 08:01:01 compute-0 run-parts[267758]: (/etc/cron.hourly) starting 0anacron
Nov 29 08:01:01 compute-0 run-parts[267765]: (/etc/cron.hourly) finished 0anacron
Nov 29 08:01:01 compute-0 CROND[267752]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 08:01:01 compute-0 magical_matsumoto[267731]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:01:01 compute-0 magical_matsumoto[267731]: --> relative data size: 1.0
Nov 29 08:01:01 compute-0 magical_matsumoto[267731]: --> All data devices are unavailable
Nov 29 08:01:01 compute-0 systemd[1]: libpod-1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2.scope: Deactivated successfully.
Nov 29 08:01:01 compute-0 systemd[1]: libpod-1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2.scope: Consumed 1.174s CPU time.
Nov 29 08:01:01 compute-0 podman[267715]: 2025-11-29 08:01:01.38399451 +0000 UTC m=+1.398051835 container died 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-70e8fb4bdd291fb74064ba969210faabdb6a8c60d51937debcac41bcf449c910-merged.mount: Deactivated successfully.
Nov 29 08:01:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Nov 29 08:01:01 compute-0 podman[267715]: 2025-11-29 08:01:01.459355686 +0000 UTC m=+1.473412991 container remove 1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_matsumoto, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 08:01:01 compute-0 systemd[1]: libpod-conmon-1c0a5da3007a4d900a8c79ea457a3e4812c3f991ea9775698fad744c4de9c0e2.scope: Deactivated successfully.
Nov 29 08:01:01 compute-0 sudo[267608]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:01 compute-0 sudo[267784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:01 compute-0 sudo[267784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:01 compute-0 sudo[267784]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:01 compute-0 sudo[267809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:01:01 compute-0 sudo[267809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:01 compute-0 sudo[267809]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 29 08:01:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 08:01:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1324665037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1324665037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:01 compute-0 sudo[267834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 29 08:01:01 compute-0 sudo[267834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:01 compute-0 sudo[267834]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:01 compute-0 sudo[267859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:01:01 compute-0 sudo[267859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.354872257 +0000 UTC m=+0.101237236 container create db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.287681163 +0000 UTC m=+0.034046172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:02 compute-0 systemd[1]: Started libpod-conmon-db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e.scope.
Nov 29 08:01:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.458780694 +0000 UTC m=+0.205145693 container init db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.468951789 +0000 UTC m=+0.215316768 container start db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.473330687 +0000 UTC m=+0.219695686 container attach db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:01:02 compute-0 jovial_benz[267940]: 167 167
Nov 29 08:01:02 compute-0 systemd[1]: libpod-db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e.scope: Deactivated successfully.
Nov 29 08:01:02 compute-0 conmon[267940]: conmon db5672150f0f11b934e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e.scope/container/memory.events
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.478900857 +0000 UTC m=+0.225265836 container died db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:01:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11e01eed555a2f9fa3a59b0a1fe4e6bcf03a11aa44157552403fa7332408a19-merged.mount: Deactivated successfully.
Nov 29 08:01:02 compute-0 podman[267923]: 2025-11-29 08:01:02.533252356 +0000 UTC m=+0.279617335 container remove db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:01:02 compute-0 systemd[1]: libpod-conmon-db5672150f0f11b934e5c6e72d00e12d92ac48446cc99067277e3c40d665756e.scope: Deactivated successfully.
Nov 29 08:01:02 compute-0 podman[267963]: 2025-11-29 08:01:02.728015267 +0000 UTC m=+0.050407293 container create f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:01:02 compute-0 ceph-mon[75237]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Nov 29 08:01:02 compute-0 ceph-mon[75237]: osdmap e189: 3 total, 3 up, 3 in
Nov 29 08:01:02 compute-0 systemd[1]: Started libpod-conmon-f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4.scope.
Nov 29 08:01:02 compute-0 podman[267963]: 2025-11-29 08:01:02.707497432 +0000 UTC m=+0.029889478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2816b1798f3ba60c073e4e90f46ec7cc041feb1b98fcf1e87eb67c1149fd0567/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2816b1798f3ba60c073e4e90f46ec7cc041feb1b98fcf1e87eb67c1149fd0567/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2816b1798f3ba60c073e4e90f46ec7cc041feb1b98fcf1e87eb67c1149fd0567/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2816b1798f3ba60c073e4e90f46ec7cc041feb1b98fcf1e87eb67c1149fd0567/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:02 compute-0 podman[267963]: 2025-11-29 08:01:02.837269748 +0000 UTC m=+0.159661794 container init f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:01:02 compute-0 podman[267963]: 2025-11-29 08:01:02.847243627 +0000 UTC m=+0.169635653 container start f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:01:02 compute-0 podman[267963]: 2025-11-29 08:01:02.851836931 +0000 UTC m=+0.174228977 container attach f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:01:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.7 KiB/s wr, 145 op/s
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]: {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     "0": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "devices": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "/dev/loop3"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             ],
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_name": "ceph_lv0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_size": "21470642176",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "name": "ceph_lv0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "tags": {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_name": "ceph",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.crush_device_class": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.encrypted": "0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_id": "0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.vdo": "0"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             },
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "vg_name": "ceph_vg0"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         }
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     ],
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     "1": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "devices": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "/dev/loop4"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             ],
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_name": "ceph_lv1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_size": "21470642176",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "name": "ceph_lv1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "tags": {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_name": "ceph",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.crush_device_class": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.encrypted": "0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_id": "1",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.vdo": "0"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             },
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "vg_name": "ceph_vg1"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         }
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     ],
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     "2": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "devices": [
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "/dev/loop5"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             ],
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_name": "ceph_lv2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_size": "21470642176",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "name": "ceph_lv2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "tags": {
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.cluster_name": "ceph",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.crush_device_class": "",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.encrypted": "0",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osd_id": "2",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:                 "ceph.vdo": "0"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             },
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "type": "block",
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:             "vg_name": "ceph_vg2"
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:         }
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]:     ]
Nov 29 08:01:03 compute-0 tender_elbakyan[267980]: }
Nov 29 08:01:03 compute-0 systemd[1]: libpod-f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4.scope: Deactivated successfully.
Nov 29 08:01:03 compute-0 podman[267963]: 2025-11-29 08:01:03.744353691 +0000 UTC m=+1.066745727 container died f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:01:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 29 08:01:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 08:01:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 29 08:01:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2816b1798f3ba60c073e4e90f46ec7cc041feb1b98fcf1e87eb67c1149fd0567-merged.mount: Deactivated successfully.
Nov 29 08:01:03 compute-0 ceph-mon[75237]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.7 KiB/s wr, 145 op/s
Nov 29 08:01:03 compute-0 ceph-mon[75237]: osdmap e190: 3 total, 3 up, 3 in
Nov 29 08:01:03 compute-0 podman[267963]: 2025-11-29 08:01:03.834610979 +0000 UTC m=+1.157003005 container remove f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:01:03 compute-0 systemd[1]: libpod-conmon-f21a1dfc28dc27cc8a2a913d4f113a5ea23004895c331740decefb6fa0c7e9c4.scope: Deactivated successfully.
Nov 29 08:01:03 compute-0 sudo[267859]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:03 compute-0 sudo[268003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:03 compute-0 sudo[268003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:03 compute-0 sudo[268003]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:04 compute-0 sudo[268028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:01:04 compute-0 sudo[268028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:04 compute-0 sudo[268028]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:04 compute-0 sudo[268053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:04 compute-0 sudo[268053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:04 compute-0 sudo[268053]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:04 compute-0 sudo[268078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:01:04 compute-0 sudo[268078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802131951' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802131951' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:04 compute-0 nova_compute[255040]: 2025-11-29 08:01:04.371 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.562795969 +0000 UTC m=+0.046218680 container create 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:01:04 compute-0 systemd[1]: Started libpod-conmon-512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a.scope.
Nov 29 08:01:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.541411642 +0000 UTC m=+0.024834373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.6479757 +0000 UTC m=+0.131398411 container init 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.654892477 +0000 UTC m=+0.138315188 container start 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.65868467 +0000 UTC m=+0.142107391 container attach 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:01:04 compute-0 eager_volhard[268158]: 167 167
Nov 29 08:01:04 compute-0 systemd[1]: libpod-512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a.scope: Deactivated successfully.
Nov 29 08:01:04 compute-0 conmon[268158]: conmon 512f732ecc42eaa19b88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a.scope/container/memory.events
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.66275466 +0000 UTC m=+0.146177371 container died 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:01:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b3dce18c20268ef1a16f2251c2991842174cb02e2b0d1c2ea733a21f9c4dca-merged.mount: Deactivated successfully.
Nov 29 08:01:04 compute-0 podman[268142]: 2025-11-29 08:01:04.700008265 +0000 UTC m=+0.183430976 container remove 512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:01:04 compute-0 systemd[1]: libpod-conmon-512f732ecc42eaa19b881d6c904bc246ac8e8e3d840f4e8176d8b0f706c3e36a.scope: Deactivated successfully.
Nov 29 08:01:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3802131951' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3802131951' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:04 compute-0 podman[268181]: 2025-11-29 08:01:04.89858437 +0000 UTC m=+0.055821149 container create 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:01:04 compute-0 systemd[1]: Started libpod-conmon-82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1.scope.
Nov 29 08:01:04 compute-0 podman[268181]: 2025-11-29 08:01:04.87600788 +0000 UTC m=+0.033244639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14b9dbf1174d1dd869c978950d611433483e16335de266a6b92d0df65b4271/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14b9dbf1174d1dd869c978950d611433483e16335de266a6b92d0df65b4271/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14b9dbf1174d1dd869c978950d611433483e16335de266a6b92d0df65b4271/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff14b9dbf1174d1dd869c978950d611433483e16335de266a6b92d0df65b4271/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:05 compute-0 podman[268181]: 2025-11-29 08:01:05.005327974 +0000 UTC m=+0.162564743 container init 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:01:05 compute-0 podman[268181]: 2025-11-29 08:01:05.011973913 +0000 UTC m=+0.169210652 container start 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:01:05 compute-0 podman[268181]: 2025-11-29 08:01:05.016710621 +0000 UTC m=+0.173947360 container attach 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:01:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 29 08:01:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 08:01:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 29 08:01:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 4.3 KiB/s wr, 130 op/s
Nov 29 08:01:05 compute-0 nova_compute[255040]: 2025-11-29 08:01:05.462 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:06 compute-0 busy_elgamal[268198]: {
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_id": 2,
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "type": "bluestore"
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     },
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_id": 0,
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "type": "bluestore"
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     },
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_id": 1,
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:         "type": "bluestore"
Nov 29 08:01:06 compute-0 busy_elgamal[268198]:     }
Nov 29 08:01:06 compute-0 busy_elgamal[268198]: }
Nov 29 08:01:06 compute-0 systemd[1]: libpod-82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1.scope: Deactivated successfully.
Nov 29 08:01:06 compute-0 podman[268181]: 2025-11-29 08:01:06.083035345 +0000 UTC m=+1.240272124 container died 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:01:06 compute-0 systemd[1]: libpod-82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1.scope: Consumed 1.078s CPU time.
Nov 29 08:01:06 compute-0 ceph-mon[75237]: osdmap e191: 3 total, 3 up, 3 in
Nov 29 08:01:06 compute-0 ceph-mon[75237]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 4.3 KiB/s wr, 130 op/s
Nov 29 08:01:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff14b9dbf1174d1dd869c978950d611433483e16335de266a6b92d0df65b4271-merged.mount: Deactivated successfully.
Nov 29 08:01:06 compute-0 podman[268181]: 2025-11-29 08:01:06.155574595 +0000 UTC m=+1.312811334 container remove 82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:01:06 compute-0 systemd[1]: libpod-conmon-82b5efdbc6a5c00a37668d6faf52f8b0b3538f18a2b2c53fff67af6f9e2c83c1.scope: Deactivated successfully.
Nov 29 08:01:06 compute-0 sudo[268078]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:01:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:01:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:01:06 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:01:06 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 003fb6ab-965a-446d-80a2-645b004b742e does not exist
Nov 29 08:01:06 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 435b21da-7f0d-4f6f-bfa1-a680351dc764 does not exist
Nov 29 08:01:06 compute-0 sudo[268242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:06 compute-0 sudo[268242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:06 compute-0 sudo[268242]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:06 compute-0 sudo[268267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:01:06 compute-0 sudo[268267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:06 compute-0 sudo[268267]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:01:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:01:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.8 KiB/s wr, 115 op/s
Nov 29 08:01:08 compute-0 ceph-mon[75237]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.8 KiB/s wr, 115 op/s
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:09 compute-0 nova_compute[255040]: 2025-11-29 08:01:09.373 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.9 KiB/s wr, 112 op/s
Nov 29 08:01:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 29 08:01:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 08:01:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 29 08:01:10 compute-0 sshd-session[267985]: Received disconnect from 45.78.219.195 port 52452:11: Bye Bye [preauth]
Nov 29 08:01:10 compute-0 sshd-session[267985]: Disconnected from authenticating user root 45.78.219.195 port 52452 [preauth]
Nov 29 08:01:10 compute-0 nova_compute[255040]: 2025-11-29 08:01:10.467 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:10 compute-0 ceph-mon[75237]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.9 KiB/s wr, 112 op/s
Nov 29 08:01:10 compute-0 ceph-mon[75237]: osdmap e192: 3 total, 3 up, 3 in
Nov 29 08:01:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Nov 29 08:01:11 compute-0 podman[268292]: 2025-11-29 08:01:11.947311916 +0000 UTC m=+0.110242899 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:01:12 compute-0 ceph-mon[75237]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Nov 29 08:01:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 857 B/s wr, 20 op/s
Nov 29 08:01:13 compute-0 ceph-mon[75237]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 857 B/s wr, 20 op/s
Nov 29 08:01:14 compute-0 nova_compute[255040]: 2025-11-29 08:01:14.374 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:14 compute-0 nova_compute[255040]: 2025-11-29 08:01:14.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:14 compute-0 nova_compute[255040]: 2025-11-29 08:01:14.978 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:01:15 compute-0 nova_compute[255040]: 2025-11-29 08:01:15.051 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:01:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 716 B/s wr, 16 op/s
Nov 29 08:01:15 compute-0 nova_compute[255040]: 2025-11-29 08:01:15.470 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 716 B/s wr, 16 op/s
Nov 29 08:01:17 compute-0 ceph-mon[75237]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 716 B/s wr, 16 op/s
Nov 29 08:01:19 compute-0 ceph-mon[75237]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 716 B/s wr, 16 op/s
Nov 29 08:01:19 compute-0 nova_compute[255040]: 2025-11-29 08:01:19.421 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:20 compute-0 nova_compute[255040]: 2025-11-29 08:01:20.050 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:20 compute-0 nova_compute[255040]: 2025-11-29 08:01:20.473 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-0 ceph-mon[75237]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:20 compute-0 nova_compute[255040]: 2025-11-29 08:01:20.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:21 compute-0 podman[268318]: 2025-11-29 08:01:21.901253349 +0000 UTC m=+0.064838854 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:01:21 compute-0 nova_compute[255040]: 2025-11-29 08:01:21.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:22 compute-0 nova_compute[255040]: 2025-11-29 08:01:22.422 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:22 compute-0 nova_compute[255040]: 2025-11-29 08:01:22.993 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:22 compute-0 nova_compute[255040]: 2025-11-29 08:01:22.994 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:22 compute-0 nova_compute[255040]: 2025-11-29 08:01:22.994 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:01:22 compute-0 nova_compute[255040]: 2025-11-29 08:01:22.994 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:01:23 compute-0 ceph-mon[75237]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:23 compute-0 nova_compute[255040]: 2025-11-29 08:01:23.650 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:01:23 compute-0 nova_compute[255040]: 2025-11-29 08:01:23.651 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:23 compute-0 nova_compute[255040]: 2025-11-29 08:01:23.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.007 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.009 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.423 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692300737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.510 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:24 compute-0 ceph-mon[75237]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2692300737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.662 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.664 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4706MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.664 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.664 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.882 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.882 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:01:24 compute-0 nova_compute[255040]: 2025-11-29 08:01:24.929 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.475 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336003150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.593 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.600 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.637 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.639 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.640 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/336003150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:25 compute-0 nova_compute[255040]: 2025-11-29 08:01:25.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.082 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.082 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.083 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.083 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:01:26 compute-0 ceph-mon[75237]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:26 compute-0 nova_compute[255040]: 2025-11-29 08:01:26.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:27.122 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:27.123 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:27.123 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:27 compute-0 podman[268381]: 2025-11-29 08:01:27.902990801 +0000 UTC m=+0.065964283 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 08:01:27 compute-0 ceph-mon[75237]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:01:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 08:01:29 compute-0 nova_compute[255040]: 2025-11-29 08:01:29.492 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:29 compute-0 ceph-mon[75237]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 08:01:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.389 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.390 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.408 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.479 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.587 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.588 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.597 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.598 255071 INFO nova.compute.claims [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:01:30 compute-0 nova_compute[255040]: 2025-11-29 08:01:30.700 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942647771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.170 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.177 255071 DEBUG nova.compute.provider_tree [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.196 255071 DEBUG nova.scheduler.client.report [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3942647771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.219 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.220 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.274 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.275 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.295 255071 INFO nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.313 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.397 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.398 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.399 255071 INFO nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Creating image(s)
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.429 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.454 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 426 B/s wr, 1 op/s
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.475 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.479 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.524 255071 DEBUG nova.policy [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '342375d9cda748d0bdc3985fba484510', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '130bffe4c30f493aa286a3620fd260ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.570 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.571 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.572 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.572 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.597 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:31 compute-0 nova_compute[255040]: 2025-11-29 08:01:31.602 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:32 compute-0 nova_compute[255040]: 2025-11-29 08:01:32.209 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Successfully created port: 53214bb0-d184-4728-8d1e-d7fa4a77f667 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:01:32 compute-0 ceph-mon[75237]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 426 B/s wr, 1 op/s
Nov 29 08:01:32 compute-0 nova_compute[255040]: 2025-11-29 08:01:32.862 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Successfully updated port: 53214bb0-d184-4728-8d1e-d7fa4a77f667 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:01:32 compute-0 nova_compute[255040]: 2025-11-29 08:01:32.881 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:32 compute-0 nova_compute[255040]: 2025-11-29 08:01:32.881 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquired lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:32 compute-0 nova_compute[255040]: 2025-11-29 08:01:32.882 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.112 255071 DEBUG nova.compute.manager [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-changed-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.113 255071 DEBUG nova.compute.manager [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Refreshing instance network info cache due to event network-changed-53214bb0-d184-4728-8d1e-d7fa4a77f667. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.113 255071 DEBUG oslo_concurrency.lockutils [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.155 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.182 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.245 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] resizing rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.355 255071 DEBUG nova.objects.instance [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'migration_context' on Instance uuid 394d590b-38c1-44bb-8370-a9d12c6b7ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.370 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.371 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Ensure instance console log exists: /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.372 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.372 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.372 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 57 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 862 KiB/s wr, 12 op/s
Nov 29 08:01:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/840003791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/840003791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:33 compute-0 nova_compute[255040]: 2025-11-29 08:01:33.901 255071 DEBUG nova.network.neutron [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Updating instance_info_cache with network_info: [{"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/840003791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/840003791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:34 compute-0 nova_compute[255040]: 2025-11-29 08:01:34.494 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.193 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Releasing lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.194 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Instance network_info: |[{"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.195 255071 DEBUG oslo_concurrency.lockutils [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.196 255071 DEBUG nova.network.neutron [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Refreshing network info cache for port 53214bb0-d184-4728-8d1e-d7fa4a77f667 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.202 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Start _get_guest_xml network_info=[{"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.210 255071 WARNING nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.218 255071 DEBUG nova.virt.libvirt.host [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.219 255071 DEBUG nova.virt.libvirt.host [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.227 255071 DEBUG nova.virt.libvirt.host [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.228 255071 DEBUG nova.virt.libvirt.host [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.229 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.229 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.230 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.230 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.230 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.231 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.231 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.231 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.231 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.231 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.232 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.232 255071 DEBUG nova.virt.hardware [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.235 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:35 compute-0 ceph-mon[75237]: pgmap v1200: 305 pgs: 305 active+clean; 57 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 862 KiB/s wr, 12 op/s
Nov 29 08:01:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 08:01:35 compute-0 nova_compute[255040]: 2025-11-29 08:01:35.482 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:01:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373021752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.091 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.115 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.119 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:36 compute-0 ceph-mon[75237]: pgmap v1201: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 08:01:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/373021752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:01:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053889697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.585 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.587 255071 DEBUG nova.virt.libvirt.vif [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-438987100',display_name='tempest-VolumesActionsTest-instance-438987100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-438987100',id=3,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-59m570qx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:31Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=394d590b-38c1-44bb-8370-a9d12c6b7ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.587 255071 DEBUG nova.network.os_vif_util [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.588 255071 DEBUG nova.network.os_vif_util [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.590 255071 DEBUG nova.objects.instance [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 394d590b-38c1-44bb-8370-a9d12c6b7ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.923 255071 DEBUG nova.network.neutron [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Updated VIF entry in instance network info cache for port 53214bb0-d184-4728-8d1e-d7fa4a77f667. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:01:36 compute-0 nova_compute[255040]: 2025-11-29 08:01:36.923 255071 DEBUG nova.network.neutron [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Updating instance_info_cache with network_info: [{"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.039 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <uuid>394d590b-38c1-44bb-8370-a9d12c6b7ef0</uuid>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <name>instance-00000003</name>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesActionsTest-instance-438987100</nova:name>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:01:35</nova:creationTime>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:user uuid="342375d9cda748d0bdc3985fba484510">tempest-VolumesActionsTest-1980568150-project-member</nova:user>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:project uuid="130bffe4c30f493aa286a3620fd260ca">tempest-VolumesActionsTest-1980568150</nova:project>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <nova:port uuid="53214bb0-d184-4728-8d1e-d7fa4a77f667">
Nov 29 08:01:38 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <system>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="serial">394d590b-38c1-44bb-8370-a9d12c6b7ef0</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="uuid">394d590b-38c1-44bb-8370-a9d12c6b7ef0</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </system>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <os>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </os>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <features>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </features>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk">
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </source>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config">
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </source>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:01:38 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:24:70:25"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <target dev="tap53214bb0-d1"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/console.log" append="off"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <video>
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </video>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:01:38 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:01:38 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:01:38 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:01:38 compute-0 nova_compute[255040]: </domain>
Nov 29 08:01:38 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.041 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Preparing to wait for external event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.041 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.041 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.042 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.042 255071 DEBUG nova.virt.libvirt.vif [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-438987100',display_name='tempest-VolumesActionsTest-instance-438987100',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-438987100',id=3,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-59m570qx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:31Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=394d590b-38c1-44bb-8370-a9d12c6b7ef0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.043 255071 DEBUG nova.network.os_vif_util [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.043 255071 DEBUG nova.network.os_vif_util [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.044 255071 DEBUG os_vif [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
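The "Plugging vif" call above is the os-vif library entry point: Nova hands it the converted VIFOpenVSwitch object plus an InstanceInfo. A minimal sketch of the same entry points, reusing the IDs from this log (the exact field choices are assumptions based on the os-vif object model, not copied from Nova's code):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs/linux_bridge/noop plugins

    net = network.Network(id='7c59af87-9673-4cef-a687-a8698997e2ed',
                          bridge='br-int')
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='53214bb0-d184-4728-8d1e-d7fa4a77f667')
    v = vif.VIFOpenVSwitch(id='53214bb0-d184-4728-8d1e-d7fa4a77f667',
                           address='fa:16:3e:24:70:25',
                           bridge_name='br-int',
                           vif_name='tap53214bb0-d1',
                           network=net,
                           port_profile=profile)
    inst = instance_info.InstanceInfo(
        uuid='394d590b-38c1-44bb-8370-a9d12c6b7ef0',
        name='instance-00000003')

    os_vif.plug(v, inst)  # dispatches to the 'ovs' plugin, which issues
                          # the OVSDB transactions logged below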
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.045 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.045 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.046 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.049 255071 DEBUG oslo_concurrency.lockutils [req-e9a2bba6-00b3-4666-b8d2-f1177d9698b2 req-528d8723-9111-43e9-9740-3baaa233f7d2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-394d590b-38c1-44bb-8370-a9d12c6b7ef0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.052 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.052 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53214bb0-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.053 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53214bb0-d1, col_values=(('external_ids', {'iface-id': '53214bb0-d184-4728-8d1e-d7fa4a77f667', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:70:25', 'vm-uuid': '394d590b-38c1-44bb-8370-a9d12c6b7ef0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
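The AddBridgeCommand/AddPortCommand/DbSetCommand objects above are ovsdbapp commands queued into a single transaction against the local OVSDB. A minimal sketch of the same calls through ovsdbapp's public API, assuming a switch reachable at tcp:127.0.0.1:6640 (the endpoint is an assumption; the column values mirror the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap53214bb0-d1', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap53214bb0-d1',
            ('external_ids',
             {'iface-id': '53214bb0-d184-4728-8d1e-d7fa4a77f667',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:24:70:25',
              'vm-uuid': '394d590b-38c1-44bb-8370-a9d12c6b7ef0'})))

With may_exist=True the commands are idempotent, which is why the earlier AddBridgeCommand commit reported "Transaction caused no change": br-int already existed.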
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.055 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:38 compute-0 NetworkManager[49116]: <info>  [1764403298.0576] manager: (tap53214bb0-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.058 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.069 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.071 255071 INFO os_vif [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1')
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:01:38
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'images', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data']
Nov 29 08:01:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:01:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2053889697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.941 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.941 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.942 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No VIF found with MAC fa:16:3e:24:70:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.942 255071 INFO nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Using config drive
Nov 29 08:01:38 compute-0 nova_compute[255040]: 2025-11-29 08:01:38.968 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.310 255071 INFO nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Creating config drive at /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.316 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp94oqwcfp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.449 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp94oqwcfp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.479 255071 DEBUG nova.storage.rbd_utils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.485 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:39 compute-0 nova_compute[255040]: 2025-11-29 08:01:39.508 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:40 compute-0 ceph-mon[75237]: pgmap v1202: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 08:01:40 compute-0 ceph-mon[75237]: pgmap v1203: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.718 255071 DEBUG oslo_concurrency.processutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config 394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.719 255071 INFO nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Deleting local config drive /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config because it was imported into RBD.
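Config-drive handling above is two subprocess calls: mkisofs packs the metadata directory into an ISO9660 image labelled config-2, then rbd import copies it into the vms pool, after which the local file is deleted. A compressed sketch of the same sequence via oslo.concurrency (the temp dir name comes from this log and would differ per run; the -publisher/-quiet flags are omitted for brevity):

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '394d590b-38c1-44bb-8370-a9d12c6b7ef0/disk.config')

    # build the config drive; cloud-init finds it by the 'config-2' label
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-J', '-r', '-V', 'config-2', '/tmp/tmp94oqwcfp')

    # import into Ceph so every disk for this instance lives in RBD
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '394d590b-38c1-44bb-8370-a9d12c6b7ef0_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')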
Nov 29 08:01:40 compute-0 kernel: tap53214bb0-d1: entered promiscuous mode
Nov 29 08:01:40 compute-0 NetworkManager[49116]: <info>  [1764403300.7900] manager: (tap53214bb0-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 08:01:40 compute-0 ovn_controller[153295]: 2025-11-29T08:01:40Z|00046|binding|INFO|Claiming lport 53214bb0-d184-4728-8d1e-d7fa4a77f667 for this chassis.
Nov 29 08:01:40 compute-0 ovn_controller[153295]: 2025-11-29T08:01:40Z|00047|binding|INFO|53214bb0-d184-4728-8d1e-d7fa4a77f667: Claiming fa:16:3e:24:70:25 10.100.0.14
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.791 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.799 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.801 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.808 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:70:25 10.100.0.14'], port_security=['fa:16:3e:24:70:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '394d590b-38c1-44bb-8370-a9d12c6b7ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c59af87-9673-4cef-a687-a8698997e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '130bffe4c30f493aa286a3620fd260ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '622ec407-a852-4817-9d30-c772bce4eb0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65ecf4-28fb-4c94-9662-60b0418afead, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=53214bb0-d184-4728-8d1e-d7fa4a77f667) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.810 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 53214bb0-d184-4728-8d1e-d7fa4a77f667 in datapath 7c59af87-9673-4cef-a687-a8698997e2ed bound to our chassis
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.812 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:01:40 compute-0 systemd-machined[216271]: New machine qemu-3-instance-00000003.
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.827 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5c112b18-4686-4184-b383-869f918336f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.829 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c59af87-91 in ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.833 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c59af87-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.833 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce0f159-0204-4794-83a6-2b8c7dd2d952]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.834 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5f8ad8-6bd4-417a-a64a-5004a0bcde15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.852 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[33fd8c67-bfe8-47c0-a21d-3f1cabf973da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.874 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.876 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[069ad98a-5945-4be0-90d1-b370dbc7f5f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_controller[153295]: 2025-11-29T08:01:40Z|00048|binding|INFO|Setting lport 53214bb0-d184-4728-8d1e-d7fa4a77f667 ovn-installed in OVS
Nov 29 08:01:40 compute-0 ovn_controller[153295]: 2025-11-29T08:01:40Z|00049|binding|INFO|Setting lport 53214bb0-d184-4728-8d1e-d7fa4a77f667 up in Southbound
Nov 29 08:01:40 compute-0 nova_compute[255040]: 2025-11-29 08:01:40.879 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:40 compute-0 systemd-udevd[268727]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:01:40 compute-0 NetworkManager[49116]: <info>  [1764403300.9011] device (tap53214bb0-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:01:40 compute-0 NetworkManager[49116]: <info>  [1764403300.9021] device (tap53214bb0-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.923 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[8f47b830-a61c-453e-8499-35b051ed83a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 NetworkManager[49116]: <info>  [1764403300.9323] manager: (tap7c59af87-90): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.931 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[83e07ac2-5c73-4a43-8023-b60283d2e85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.965 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[69fc8950-24e6-4359-a850-b4265bbd4891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:40.969 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cf7cb6-e205-4505-8a6c-31f324870573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:40 compute-0 NetworkManager[49116]: <info>  [1764403300.9963] device (tap7c59af87-90): carrier: link connected
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.003 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[48c211fe-b677-431b-b39e-9ba302f65787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.025 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c06186-d2e5-460a-8f4a-6162c8b1aa79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c59af87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:14:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553833, 'reachable_time': 42737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268757, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.046 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ba466d64-7228-40e6-8e19-1e1373dbb036]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:1495'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553833, 'tstamp': 553833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268758, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.069 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d96508-bd26-405f-aa65-40caa3c39ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c59af87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:14:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553833, 'reachable_time': 42737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268759, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.107 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cc07d1-750b-4c90-9f8d-23907871fc07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.139 255071 DEBUG nova.compute.manager [req-0ac791fb-a361-48a9-8b6e-c67a29b0a3b2 req-cde3ff55-1f15-4374-b03a-d971dd061dcb cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.140 255071 DEBUG oslo_concurrency.lockutils [req-0ac791fb-a361-48a9-8b6e-c67a29b0a3b2 req-cde3ff55-1f15-4374-b03a-d971dd061dcb cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.140 255071 DEBUG oslo_concurrency.lockutils [req-0ac791fb-a361-48a9-8b6e-c67a29b0a3b2 req-cde3ff55-1f15-4374-b03a-d971dd061dcb cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.140 255071 DEBUG oslo_concurrency.lockutils [req-0ac791fb-a361-48a9-8b6e-c67a29b0a3b2 req-cde3ff55-1f15-4374-b03a-d971dd061dcb cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.141 255071 DEBUG nova.compute.manager [req-0ac791fb-a361-48a9-8b6e-c67a29b0a3b2 req-cde3ff55-1f15-4374-b03a-d971dd061dcb cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Processing event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
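The acquire/release pairs around the "...-events" key show oslo.concurrency serializing Neutron's external events against the spawning thread: both sides take the same named in-process lock before touching the instance's pending-event table. A minimal sketch of the primitive (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('394d590b-38c1-44bb-8370-a9d12c6b7ef0-events')
    def _pop_event():
        # runs under the same named lock the log shows being acquired,
        # held for ~0.000s, and released
        pass

    _pop_event()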
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.186 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a382209f-31a1-41d8-a586-91d122bcb130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.188 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c59af87-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.188 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.189 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c59af87-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:41 compute-0 NetworkManager[49116]: <info>  [1764403301.1916] manager: (tap7c59af87-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 08:01:41 compute-0 kernel: tap7c59af87-90: entered promiscuous mode
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.192 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.194 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c59af87-90, col_values=(('external_ids', {'iface-id': '10f95636-701a-4c25-bac8-f036309f6a48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:41 compute-0 ovn_controller[153295]: 2025-11-29T08:01:41Z|00050|binding|INFO|Releasing lport 10f95636-701a-4c25-bac8-f036309f6a48 from this chassis (sb_readonly=0)
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.196 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.197 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.198 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[430020f4-6066-4719-8eff-67073d80bbf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.199 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
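The generated config binds haproxy to the link-local metadata address inside the ovnmeta- namespace and forwards every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the metadata service can attribute the caller. Once the container spawned below is running, the listener can be poked from inside the namespace; a sketch using standard tooling (requires root; Nova may still answer 400/404 unless the source address maps to a port on that network, but any HTTP response proves the proxy is wired up):

    import subprocess

    ns = 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed'
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s', '-i',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        check=True)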
Nov 29 08:01:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:41.200 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'env', 'PROCESS_TAG=haproxy-7c59af87-9673-4cef-a687-a8698997e2ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c59af87-9673-4cef-a687-a8698997e2ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:01:41 compute-0 nova_compute[255040]: 2025-11-29 08:01:41.212 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 29 08:01:41 compute-0 podman[268808]: 2025-11-29 08:01:41.59206196 +0000 UTC m=+0.033193598 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.431 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.432 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403302.4307873, 394d590b-38c1-44bb-8370-a9d12c6b7ef0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.433 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] VM Started (Lifecycle Event)
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.439 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.447 255071 INFO nova.virt.libvirt.driver [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Instance spawned successfully.
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.448 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.471 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:42 compute-0 podman[268808]: 2025-11-29 08:01:42.476354567 +0000 UTC m=+0.917486155 container create e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.477 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.484 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.485 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.486 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.486 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.487 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.487 255071 DEBUG nova.virt.libvirt.driver [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.498 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.498 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403302.4310622, 394d590b-38c1-44bb-8370-a9d12c6b7ef0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.499 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] VM Paused (Lifecycle Event)
Nov 29 08:01:42 compute-0 systemd[1]: Started libpod-conmon-e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab.scope.
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.532 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.541 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403302.4357579, 394d590b-38c1-44bb-8370-a9d12c6b7ef0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.541 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] VM Resumed (Lifecycle Event)
Nov 29 08:01:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/760f48de082e29f5d6e83be5fa4996bd2b1e8650ffd526411e744735b27db41e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.565 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.571 255071 INFO nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Took 11.17 seconds to spawn the instance on the hypervisor.
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.571 255071 DEBUG nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.575 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.603 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:42 compute-0 podman[268808]: 2025-11-29 08:01:42.603843391 +0000 UTC m=+1.044974989 container init e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:01:42 compute-0 podman[268808]: 2025-11-29 08:01:42.609789421 +0000 UTC m=+1.050921009 container start e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.645 255071 INFO nova.compute.manager [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Took 12.20 seconds to build instance.
Nov 29 08:01:42 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [NOTICE]   (268865) : New worker (268875) forked
Nov 29 08:01:42 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [NOTICE]   (268865) : Loading success.
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.662 255071 DEBUG oslo_concurrency.lockutils [None req-de51a57a-d034-4cef-9341-f1b399af6ff0 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:42 compute-0 podman[268845]: 2025-11-29 08:01:42.674777406 +0000 UTC m=+0.140774193 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:01:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:01:42 compute-0 nova_compute[255040]: 2025-11-29 08:01:42.975 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:42.975 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:42.977 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.055 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:43 compute-0 ceph-mon[75237]: pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.226 255071 DEBUG nova.compute.manager [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.227 255071 DEBUG oslo_concurrency.lockutils [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.227 255071 DEBUG oslo_concurrency.lockutils [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.228 255071 DEBUG oslo_concurrency.lockutils [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.228 255071 DEBUG nova.compute.manager [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] No waiting events found dispatching network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:43 compute-0 nova_compute[255040]: 2025-11-29 08:01:43.229 255071 WARNING nova.compute.manager [req-8de9d582-fdb1-4d22-a3c4-27e2b6c3687f req-14c70228-0d75-482c-ac6d-8ce0363f4432 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received unexpected event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 for instance with vm_state active and task_state None.
Nov 29 08:01:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 29 08:01:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:43.979 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:44 compute-0 ceph-mon[75237]: pgmap v1205: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 29 08:01:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2659712140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2659712140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.498 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.810 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.810 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.811 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.812 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.812 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.813 255071 INFO nova.compute.manager [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Terminating instance
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.814 255071 DEBUG nova.compute.manager [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:01:44 compute-0 kernel: tap53214bb0-d1 (unregistering): left promiscuous mode
Nov 29 08:01:44 compute-0 NetworkManager[49116]: <info>  [1764403304.8717] device (tap53214bb0-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:01:44 compute-0 ovn_controller[153295]: 2025-11-29T08:01:44Z|00051|binding|INFO|Releasing lport 53214bb0-d184-4728-8d1e-d7fa4a77f667 from this chassis (sb_readonly=0)
Nov 29 08:01:44 compute-0 ovn_controller[153295]: 2025-11-29T08:01:44Z|00052|binding|INFO|Setting lport 53214bb0-d184-4728-8d1e-d7fa4a77f667 down in Southbound
Nov 29 08:01:44 compute-0 ovn_controller[153295]: 2025-11-29T08:01:44Z|00053|binding|INFO|Removing iface tap53214bb0-d1 ovn-installed in OVS
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.888 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:44.895 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:70:25 10.100.0.14'], port_security=['fa:16:3e:24:70:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '394d590b-38c1-44bb-8370-a9d12c6b7ef0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c59af87-9673-4cef-a687-a8698997e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '130bffe4c30f493aa286a3620fd260ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '622ec407-a852-4817-9d30-c772bce4eb0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65ecf4-28fb-4c94-9662-60b0418afead, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=53214bb0-d184-4728-8d1e-d7fa4a77f667) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:44.896 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 53214bb0-d184-4728-8d1e-d7fa4a77f667 in datapath 7c59af87-9673-4cef-a687-a8698997e2ed unbound from our chassis
Nov 29 08:01:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:44.897 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c59af87-9673-4cef-a687-a8698997e2ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:01:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:44.897 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe4d487-0c02-4a52-95f0-4a8b17102d2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:44.898 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed namespace which is not needed anymore
Nov 29 08:01:44 compute-0 nova_compute[255040]: 2025-11-29 08:01:44.927 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:44 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 29 08:01:44 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3.273s CPU time.
Nov 29 08:01:44 compute-0 systemd-machined[216271]: Machine qemu-3-instance-00000003 terminated.
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.043 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.050 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [NOTICE]   (268865) : haproxy version is 2.8.14-c23fe91
Nov 29 08:01:45 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [NOTICE]   (268865) : path to executable is /usr/sbin/haproxy
Nov 29 08:01:45 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [WARNING]  (268865) : Exiting Master process...
Nov 29 08:01:45 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [ALERT]    (268865) : Current worker (268875) exited with code 143 (Terminated)
Nov 29 08:01:45 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[268848]: [WARNING]  (268865) : All workers exited. Exiting... (0)
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.061 255071 INFO nova.virt.libvirt.driver [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Instance destroyed successfully.
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.061 255071 DEBUG nova.objects.instance [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'resources' on Instance uuid 394d590b-38c1-44bb-8370-a9d12c6b7ef0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:45 compute-0 systemd[1]: libpod-e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab.scope: Deactivated successfully.
Nov 29 08:01:45 compute-0 podman[268913]: 2025-11-29 08:01:45.070298045 +0000 UTC m=+0.061423519 container died e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.079 255071 DEBUG nova.virt.libvirt.vif [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-438987100',display_name='tempest-VolumesActionsTest-instance-438987100',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-438987100',id=3,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-59m570qx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:42Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=394d590b-38c1-44bb-8370-a9d12c6b7ef0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.079 255071 DEBUG nova.network.os_vif_util [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "address": "fa:16:3e:24:70:25", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53214bb0-d1", "ovs_interfaceid": "53214bb0-d184-4728-8d1e-d7fa4a77f667", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.080 255071 DEBUG nova.network.os_vif_util [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.081 255071 DEBUG os_vif [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.086 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.087 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53214bb0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.090 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.094 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.097 255071 INFO os_vif [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:70:25,bridge_name='br-int',has_traffic_filtering=True,id=53214bb0-d184-4728-8d1e-d7fa4a77f667,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53214bb0-d1')
Nov 29 08:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab-userdata-shm.mount: Deactivated successfully.
Nov 29 08:01:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-760f48de082e29f5d6e83be5fa4996bd2b1e8650ffd526411e744735b27db41e-merged.mount: Deactivated successfully.
Nov 29 08:01:45 compute-0 podman[268913]: 2025-11-29 08:01:45.137253964 +0000 UTC m=+0.128379438 container cleanup e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:01:45 compute-0 systemd[1]: libpod-conmon-e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab.scope: Deactivated successfully.
Nov 29 08:01:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2659712140' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2659712140' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:45 compute-0 podman[268969]: 2025-11-29 08:01:45.231314945 +0000 UTC m=+0.064127773 container remove e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.239 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5b238e-2967-444e-99f6-f52535641ca6]: (4, ('Sat Nov 29 08:01:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed (e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab)\ne8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab\nSat Nov 29 08:01:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed (e8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab)\ne8cb2d18f63f62023a51e60a677644b5fa9e64f5618872121db4037c823cffab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.241 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9b11805e-26b5-4b89-81c4-fc30b21572af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.242 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c59af87-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:45 compute-0 kernel: tap7c59af87-90: left promiscuous mode
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.244 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.259 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.263 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3077223e-4631-4d0c-8eb4-aa77c74e079a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.279 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[161d97f6-e83f-4f45-a8bb-e6f2b6b5ca6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.280 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[49b9ebea-bca4-4826-a96a-b0b4e1c5a4b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.301 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[af3bc0ed-8d69-40bc-b3f6-90b024e9f5b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553825, 'reachable_time': 27378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268984, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.303 255071 DEBUG nova.compute.manager [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-unplugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.304 255071 DEBUG oslo_concurrency.lockutils [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.304 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:01:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:01:45.305 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[9598dad4-f194-4dcb-98ad-ace8eebdeb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.305 255071 DEBUG oslo_concurrency.lockutils [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.305 255071 DEBUG oslo_concurrency.lockutils [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.306 255071 DEBUG nova.compute.manager [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] No waiting events found dispatching network-vif-unplugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.306 255071 DEBUG nova.compute.manager [req-16357d12-a557-4906-b83a-d21551c49de9 req-0e51df83-b205-4f6d-874a-48bf7e73fe2b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-unplugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:01:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c59af87\x2d9673\x2d4cef\x2da687\x2da8698997e2ed.mount: Deactivated successfully.
Nov 29 08:01:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 967 KiB/s wr, 78 op/s
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.549 255071 INFO nova.virt.libvirt.driver [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Deleting instance files /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0_del
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.550 255071 INFO nova.virt.libvirt.driver [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Deletion of /var/lib/nova/instances/394d590b-38c1-44bb-8370-a9d12c6b7ef0_del complete
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.614 255071 INFO nova.compute.manager [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.615 255071 DEBUG oslo.service.loopingcall [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.615 255071 DEBUG nova.compute.manager [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:01:45 compute-0 nova_compute[255040]: 2025-11-29 08:01:45.615 255071 DEBUG nova.network.neutron [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:01:46 compute-0 ceph-mon[75237]: pgmap v1206: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 967 KiB/s wr, 78 op/s
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.396 255071 DEBUG nova.network.neutron [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 50 op/s
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.586 255071 DEBUG nova.compute.manager [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.586 255071 DEBUG oslo_concurrency.lockutils [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.587 255071 DEBUG oslo_concurrency.lockutils [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.587 255071 DEBUG oslo_concurrency.lockutils [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.587 255071 DEBUG nova.compute.manager [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] No waiting events found dispatching network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.587 255071 WARNING nova.compute.manager [req-c8617be6-7364-4438-bb57-74961c95ae0e req-faf0e771-8d45-4a5f-9725-e53705fde81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received unexpected event network-vif-plugged-53214bb0-d184-4728-8d1e-d7fa4a77f667 for instance with vm_state active and task_state deleting.
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.589 255071 DEBUG nova.compute.manager [req-8c4e7e3b-770d-45f8-99d4-2cce74855082 req-39b7020e-f6aa-422c-9535-b74210652d3c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Received event network-vif-deleted-53214bb0-d184-4728-8d1e-d7fa4a77f667 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.589 255071 INFO nova.compute.manager [req-8c4e7e3b-770d-45f8-99d4-2cce74855082 req-39b7020e-f6aa-422c-9535-b74210652d3c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Neutron deleted interface 53214bb0-d184-4728-8d1e-d7fa4a77f667; detaching it from the instance and deleting it from the info cache
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.589 255071 DEBUG nova.network.neutron [req-8c4e7e3b-770d-45f8-99d4-2cce74855082 req-39b7020e-f6aa-422c-9535-b74210652d3c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.606 255071 INFO nova.compute.manager [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Took 1.99 seconds to deallocate network for instance.
Nov 29 08:01:47 compute-0 nova_compute[255040]: 2025-11-29 08:01:47.615 255071 DEBUG nova.compute.manager [req-8c4e7e3b-770d-45f8-99d4-2cce74855082 req-39b7020e-f6aa-422c-9535-b74210652d3c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Detach interface failed, port_id=53214bb0-d184-4728-8d1e-d7fa4a77f667, reason: Instance 394d590b-38c1-44bb-8370-a9d12c6b7ef0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.038 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.039 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.110 255071 DEBUG oslo_concurrency.processutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:48 compute-0 ceph-mon[75237]: pgmap v1207: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 13 KiB/s wr, 50 op/s
Nov 29 08:01:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313510836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.593 255071 DEBUG oslo_concurrency.processutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.599 255071 DEBUG nova.compute.provider_tree [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:48 compute-0 nova_compute[255040]: 2025-11-29 08:01:48.788 255071 DEBUG nova.scheduler.client.report [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 57 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 109 op/s
Nov 29 08:01:49 compute-0 nova_compute[255040]: 2025-11-29 08:01:49.500 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2313510836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:49 compute-0 nova_compute[255040]: 2025-11-29 08:01:49.720 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:49 compute-0 nova_compute[255040]: 2025-11-29 08:01:49.781 255071 INFO nova.scheduler.client.report [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Deleted allocations for instance 394d590b-38c1-44bb-8370-a9d12c6b7ef0
Nov 29 08:01:50 compute-0 nova_compute[255040]: 2025-11-29 08:01:50.090 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:50 compute-0 nova_compute[255040]: 2025-11-29 08:01:50.150 255071 DEBUG oslo_concurrency.lockutils [None req-d911b725-3687-4ee1-a309-e0070c074647 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "394d590b-38c1-44bb-8370-a9d12c6b7ef0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:50 compute-0 ceph-mon[75237]: pgmap v1208: 305 pgs: 305 active+clean; 57 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 109 op/s
Nov 29 08:01:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101684849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101684849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 116 op/s
Nov 29 08:01:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1101684849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1101684849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:52 compute-0 ceph-mon[75237]: pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 116 op/s
Nov 29 08:01:52 compute-0 podman[269009]: 2025-11-29 08:01:52.903866821 +0000 UTC m=+0.068290366 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:01:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 117 op/s
Nov 29 08:01:54 compute-0 nova_compute[255040]: 2025-11-29 08:01:54.502 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:54 compute-0 ceph-mon[75237]: pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 117 op/s
Nov 29 08:01:54 compute-0 nova_compute[255040]: 2025-11-29 08:01:54.927 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:54 compute-0 nova_compute[255040]: 2025-11-29 08:01:54.927 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
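
The "Acquiring lock" / "acquired ... waited" / "released ... held" DEBUG triplets throughout this build come from oslo.concurrency's lockutils helpers. A minimal sketch of the same pattern, with the lock names taken from the log purely for illustration:

    from oslo_concurrency import lockutils

    # Per-instance serialization, as in _locked_do_build_and_run_instance:
    # concurrent builds of the same instance UUID queue on this lock.
    @lockutils.synchronized("bf99fdc2-8478-42a0-8ccb-4610de952012")
    def locked_build():
        pass  # build steps run while the lock is held

    # The same helper as a context manager, as used around "compute_resources".
    with lockutils.lock("compute_resources"):
        pass  # resource-tracker critical section

    locked_build()
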
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.005 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.121 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.136 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.137 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.144 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.145 255071 INFO nova.compute.claims [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:01:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Nov 29 08:01:55 compute-0 nova_compute[255040]: 2025-11-29 08:01:55.587 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:01:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694176897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
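
The pg_autoscaler lines above can be reproduced from the logged numbers: the raw "pg target" is the pool's share of raw space times its bias times the root PG target. A sketch of that arithmetic, assuming a root PG target of 300 (roughly OSD count x mon_target_pg_per_osd); the 300 is inferred from the logged values, not read from this cluster's configuration:

    # Logged (usage_ratio, bias) pairs for three of the pools above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    ROOT_PG_TARGET = 300  # assumed, see note above

    for name, (usage_ratio, bias) in pools.items():
        raw_target = usage_ratio * bias * ROOT_PG_TARGET
        print(f"{name}: pg target {raw_target}")
    # Prints ~0.00216, ~0.1998 and ~0.00061, matching the log. The autoscaler
    # then quantizes to a power of two and, since these targets sit far below
    # the pools' current pg_num, leaves pg_num unchanged ("current 1/16/32").
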
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.068 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
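
nova's RBD image backend gathers storage statistics with `ceph df`, executed as a subprocess exactly as logged above. A minimal sketch of the same call, reading the cluster totals back out of the JSON; the client id and conf path are the ones from the logged command:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    # Cluster-wide raw capacity and free space in bytes.
    print(stats["total_bytes"], stats["total_avail_bytes"])
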
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.076 255071 DEBUG nova.compute.provider_tree [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.299 255071 DEBUG nova.scheduler.client.report [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
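
The inventory dict logged above implies the capacity placement can schedule against: for each resource class, usable = (total - reserved) * allocation_ratio. A quick check of those numbers:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # -> MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
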
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.330 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.331 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.384 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.385 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.405 255071 INFO nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.421 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.509 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.510 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.511 255071 INFO nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Creating image(s)
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.533 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.558 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.581 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.585 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.612 255071 DEBUG nova.policy [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '342375d9cda748d0bdc3985fba484510', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '130bffe4c30f493aa286a3620fd260ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.651 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
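
Before importing the cached base image, nova inspects it with `qemu-img info` wrapped in oslo_concurrency.prlimit so a corrupt or hostile image cannot consume unbounded memory or CPU. A sketch of the same invocation, with the address-space and CPU limits copied from the logged command and the JSON result parsed directly:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059"
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", base, "--force-share", "--output=json"]
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                     text=True).stdout)
    print(info["format"], info["virtual-size"])  # image format, virtual size in bytes
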
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.652 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.652 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.653 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.673 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.677 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 bf99fdc2-8478-42a0-8ccb-4610de952012_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.706 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.707 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.726 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:01:56 compute-0 ceph-mon[75237]: pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Nov 29 08:01:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2694176897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.822 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.823 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.830 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.830 255071 INFO nova.compute.claims [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:01:56 compute-0 nova_compute[255040]: 2025-11-29 08:01:56.958 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.032 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 bf99fdc2-8478-42a0-8ccb-4610de952012_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.098 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] resizing rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
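
The two steps logged just above, importing the cached base file into the 'vms' pool and resizing the result to the flavor's 1 GiB root disk, can be reproduced with the rbd CLI; nova itself performs the resize through librbd, so this is a CLI-equivalent sketch using the paths and names from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059"
    disk = "bf99fdc2-8478-42a0-8ccb-4610de952012_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Push the flat base image into RADOS as a format-2 RBD image.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    # Grow it to the flavor's root disk size; --size is in MiB, so 1024 MiB = 1 GiB.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1024", *auth], check=True)
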
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.194 255071 DEBUG nova.objects.instance [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'migration_context' on Instance uuid bf99fdc2-8478-42a0-8ccb-4610de952012 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.218 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.219 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Ensure instance console log exists: /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.220 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.220 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.220 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.367 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Successfully created port: 700d2367-3b63-4cd6-acb3-a96968287ef7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:01:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235854025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.419 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.426 255071 DEBUG nova.compute.provider_tree [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.441 255071 DEBUG nova.scheduler.client.report [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.469 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.470 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:01:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 2.3 KiB/s wr, 80 op/s
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.522 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.523 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.549 255071 INFO nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.571 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.700 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.702 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.702 255071 INFO nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Creating image(s)
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.725 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3235854025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.750 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.774 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.777 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.832 255071 DEBUG nova.policy [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2f0bad5019c043259e8f0cdbb532a167', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.846 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.847 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.848 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.848 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.870 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:57 compute-0 nova_compute[255040]: 2025-11-29 08:01:57.874 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 f350e0f9-cc77-497b-a727-5576d4812c31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.195 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 f350e0f9-cc77-497b-a727-5576d4812c31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.257 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] resizing rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.344 255071 DEBUG nova.objects.instance [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'migration_context' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.407 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.408 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Ensure instance console log exists: /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.409 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.409 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:58 compute-0 nova_compute[255040]: 2025-11-29 08:01:58.409 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602188023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602188023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:58 compute-0 ceph-mon[75237]: pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 2.3 KiB/s wr, 80 op/s
Nov 29 08:01:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3602188023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3602188023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:58 compute-0 podman[269406]: 2025-11-29 08:01:58.907534326 +0000 UTC m=+0.075240223 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.023 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Successfully updated port: 700d2367-3b63-4cd6-acb3-a96968287ef7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.043 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.044 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquired lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.044 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.241 255071 DEBUG nova.compute.manager [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-changed-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.242 255071 DEBUG nova.compute.manager [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Refreshing instance network info cache due to event network-changed-700d2367-3b63-4cd6-acb3-a96968287ef7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.242 255071 DEBUG oslo_concurrency.lockutils [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 949 KiB/s rd, 2.8 MiB/s wr, 109 op/s
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.504 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.602 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:01:59 compute-0 nova_compute[255040]: 2025-11-29 08:01:59.692 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Successfully created port: 7797cf74-15a6-482f-9e6c-39e396a230f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:01:59 compute-0 ceph-mon[75237]: pgmap v1213: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 949 KiB/s rd, 2.8 MiB/s wr, 109 op/s
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.057 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403305.0552063, 394d590b-38c1-44bb-8370-a9d12c6b7ef0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.057 255071 INFO nova.compute.manager [-] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] VM Stopped (Lifecycle Event)
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.077 255071 DEBUG nova.compute.manager [None req-76f9dede-11bd-4b69-99e5-4e451ca4d84f - - - - - -] [instance: 394d590b-38c1-44bb-8370-a9d12c6b7ef0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.123 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.891 255071 DEBUG nova.network.neutron [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Updating instance_info_cache with network_info: [{"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.922 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Releasing lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.923 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Instance network_info: |[{"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
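
The network_info structure logged above is a list of VIF dicts; the fields the libvirt driver needs for the guest XML are the MAC address, the tap device name, the MTU and the fixed IPs. A sketch pulling those out of an abbreviated copy of the logged data:

    # Abbreviated to the keys used; the full dict is in the log line above.
    vif = {
        "id": "700d2367-3b63-4cd6-acb3-a96968287ef7",
        "address": "fa:16:3e:d6:a9:88",
        "devname": "tap700d2367-3b",
        "network": {
            "bridge": "br-int",
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.5"}]}],
        },
    }
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["address"], vif["devname"], vif["network"]["meta"]["mtu"], fixed_ips)
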
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.923 255071 DEBUG oslo_concurrency.lockutils [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.923 255071 DEBUG nova.network.neutron [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Refreshing network info cache for port 700d2367-3b63-4cd6-acb3-a96968287ef7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.926 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Start _get_guest_xml network_info=[{"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.931 255071 WARNING nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.936 255071 DEBUG nova.virt.libvirt.host [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.937 255071 DEBUG nova.virt.libvirt.host [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.944 255071 DEBUG nova.virt.libvirt.host [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.944 255071 DEBUG nova.virt.libvirt.host [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
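[editor's note] The four lines above show nova probing first cgroups v1, then v2, for a usable cpu controller (needed to apply CPU shares/quota to guests). On this cgroups-v2-only host the v1 probe fails and the v2 probe succeeds. A rough standalone equivalent of the two probes, assuming the standard mount points:

    import os

    def has_cgroupsv1_cpu() -> bool:
        # v1 exposes one mount per controller, e.g. /sys/fs/cgroup/cpu
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu() -> bool:
        # v2 lists enabled controllers in one file at the unified root
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())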
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.945 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.945 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.946 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.946 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.946 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.947 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.947 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.947 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.947 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.947 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.948 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.948 255071 DEBUG nova.virt.hardware [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
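[editor's note] The sequence above is nova.virt.hardware deriving the guest CPU topology: flavor and image set no limits or preferences (0:0:0), the ceiling defaults to 65536 per dimension, and for 1 vCPU the only candidate is sockets=1, cores=1, threads=1 (which ends up in the <topology> element of the domain XML below). A simplified sketch of the enumeration, not nova's exact implementation:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) whose product equals the vCPU
        # count and respects the per-dimension ceilings.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
                yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as in this log
    print(list(possible_topologies(4)))   # e.g. (1, 1, 4), (1, 2, 2), (2, 2, 1), ...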
Nov 29 08:02:00 compute-0 nova_compute[255040]: 2025-11-29 08:02:00.951 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.046 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Successfully updated port: 7797cf74-15a6-482f-9e6c-39e396a230f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.072 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.072 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquired lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.072 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.337 255071 DEBUG nova.compute.manager [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-changed-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.338 255071 DEBUG nova.compute.manager [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Refreshing instance network info cache due to event network-changed-7797cf74-15a6-482f-9e6c-39e396a230f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.338 255071 DEBUG oslo_concurrency.lockutils [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149113582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.400 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
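[editor's note] Here nova shells out to the ceph CLI to discover the monitor addresses it will later embed in the disk XML (<host name="192.168.122.100" port="6789"/>). A minimal reproduction of that call, with the client id and conf path taken verbatim from the log line:

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    mon_dump = json.loads(subprocess.run(cmd, capture_output=True,
                                         text=True, check=True).stdout)
    for mon in mon_dump.get("mons", []):
        print(mon["name"], mon.get("addr"))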
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.423 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.427 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3149113582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.753 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:02:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272233103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.882 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.884 255071 DEBUG nova.virt.libvirt.vif [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-213456701',display_name='tempest-VolumesActionsTest-instance-213456701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-213456701',id=4,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-l0w0b93x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:56Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=bf99fdc2-8478-42a0-8ccb-4610de952012,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.885 255071 DEBUG nova.network.os_vif_util [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.886 255071 DEBUG nova.network.os_vif_util [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
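[editor's note] The two lines above show nova handing the Neutron-style VIF dict to os-vif, which turns it into a typed VIFOpenVSwitch object (plugin 'ovs', vif_name 'tap700d2367-3b'). The fields that matter for plugging can be read straight off the dict; a small illustration using a trimmed-down copy of the JSON from this log:

    import json

    vif = json.loads("""{"id": "700d2367-3b63-4cd6-acb3-a96968287ef7",
      "address": "fa:16:3e:d6:a9:88", "devname": "tap700d2367-3b",
      "type": "ovs",
      "details": {"bridge_name": "br-int", "datapath_type": "system"},
      "network": {"meta": {"mtu": 1442}}}""")

    # The same attributes os-vif carries on VIFOpenVSwitch:
    print(vif["devname"], vif["address"],
          vif["details"]["bridge_name"], vif["network"]["meta"]["mtu"])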
Nov 29 08:02:01 compute-0 nova_compute[255040]: 2025-11-29 08:02:01.887 255071 DEBUG nova.objects.instance [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'pci_devices' on Instance uuid bf99fdc2-8478-42a0-8ccb-4610de952012 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:02 compute-0 ceph-mon[75237]: pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Nov 29 08:02:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2272233103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.631 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <uuid>bf99fdc2-8478-42a0-8ccb-4610de952012</uuid>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <name>instance-00000004</name>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesActionsTest-instance-213456701</nova:name>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:02:00</nova:creationTime>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:user uuid="342375d9cda748d0bdc3985fba484510">tempest-VolumesActionsTest-1980568150-project-member</nova:user>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:project uuid="130bffe4c30f493aa286a3620fd260ca">tempest-VolumesActionsTest-1980568150</nova:project>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <nova:port uuid="700d2367-3b63-4cd6-acb3-a96968287ef7">
Nov 29 08:02:02 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <system>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="serial">bf99fdc2-8478-42a0-8ccb-4610de952012</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="uuid">bf99fdc2-8478-42a0-8ccb-4610de952012</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </system>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <os>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </os>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <features>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </features>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/bf99fdc2-8478-42a0-8ccb-4610de952012_disk">
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </source>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config">
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </source>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:02:02 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:d6:a9:88"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <target dev="tap700d2367-3b"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/console.log" append="off"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <video>
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </video>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:02:02 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:02:02 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:02:02 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:02:02 compute-0 nova_compute[255040]: </domain>
Nov 29 08:02:02 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
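[editor's note] The domain XML dumped above is what nova passes to libvirt to define the guest: q35 machine type, host-model CPU with the 1:1:1 topology chosen earlier, two RBD-backed disks (the root disk plus the config-drive CDROM, both authenticating as the 'openstack' cephx user), and one virtio NIC at MTU 1442. A quick way to pull the interesting bits back out of such a dump with the standard library ('domain.xml' is a hypothetical file holding the XML above):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print("name:", root.findtext("name"))
    print("uuid:", root.findtext("uuid"))
    print("memory KiB:", root.findtext("memory"))
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"), tgt.get("dev"), "->",
              src.get("protocol"), src.get("name"))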
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.633 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Preparing to wait for external event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.633 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.634 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.634 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.635 255071 DEBUG nova.virt.libvirt.vif [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-213456701',display_name='tempest-VolumesActionsTest-instance-213456701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-213456701',id=4,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-l0w0b93x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:56Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=bf99fdc2-8478-42a0-8ccb-4610de952012,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.635 255071 DEBUG nova.network.os_vif_util [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.636 255071 DEBUG nova.network.os_vif_util [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.636 255071 DEBUG os_vif [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.637 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.638 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.638 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.642 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.643 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap700d2367-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.643 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap700d2367-3b, col_values=(('external_ids', {'iface-id': '700d2367-3b63-4cd6-acb3-a96968287ef7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:a9:88', 'vm-uuid': 'bf99fdc2-8478-42a0-8ccb-4610de952012'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.646 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:02 compute-0 NetworkManager[49116]: <info>  [1764403322.6475] manager: (tap700d2367-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.648 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.654 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.656 255071 INFO os_vif [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b')
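[editor's note] Plugging boils down to the two ovsdb transactions logged above: ensure br-int exists, add the tap port, and set the external_ids that let OVN bind the logical port to this interface. os-vif performs this natively over the OVSDB socket; a rough CLI equivalent driven from Python, with all names taken from this log, might look like:

    import subprocess

    port = "tap700d2367-3b"
    subprocess.run([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=700d2367-3b63-4cd6-acb3-a96968287ef7",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d6:a9:88",
        "external_ids:vm-uuid=bf99fdc2-8478-42a0-8ccb-4610de952012",
    ], check=True)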
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.716 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.716 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.717 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] No VIF found with MAC fa:16:3e:d6:a9:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.717 255071 INFO nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Using config drive
Nov 29 08:02:02 compute-0 nova_compute[255040]: 2025-11-29 08:02:02.739 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253443350' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253443350' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.227 255071 INFO nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Creating config drive at /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.233 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptu1irhyb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.347 255071 DEBUG nova.network.neutron [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Updated VIF entry in instance network info cache for port 700d2367-3b63-4cd6-acb3-a96968287ef7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.348 255071 DEBUG nova.network.neutron [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Updating instance_info_cache with network_info: [{"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.361 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptu1irhyb" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
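[editor's note] The config drive is just an ISO9660 image labelled config-2 (the volume label cloud-init looks for) built from a temporary directory of metadata files; nova invokes mkisofs with the flags shown above. A pared-down reproduction with the flags copied from the log and hypothetical input/output paths:

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "/tmp/disk.config",          # hypothetical output path
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",
        "/tmp/metadata_dir",               # hypothetical input directory
    ], check=True)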
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.390 255071 DEBUG nova.storage.rbd_utils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] rbd image bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.397 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
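[editor's note] Because this deployment stores instance disks in RBD, the freshly built ISO is then imported into the vms pool under the exact image name the domain XML's config-drive <source> element references. The equivalent call, with arguments exactly as logged:

    import subprocess

    subprocess.run([
        "rbd", "import",
        "--pool", "vms",
        "/var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config",
        "bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config",
        "--image-format=2",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)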
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.424 255071 DEBUG oslo_concurrency.lockutils [req-a0ccc152-3e75-4ae0-8d54-356596a8feda req-faca161e-6c37-4367-a586-0f3f56c22b88 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-bf99fdc2-8478-42a0-8ccb-4610de952012" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1253443350' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1253443350' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.477 255071 DEBUG nova.network.neutron [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating instance_info_cache with network_info: [{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 67 op/s
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.496 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Releasing lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.497 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Instance network_info: |[{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.498 255071 DEBUG oslo_concurrency.lockutils [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.498 255071 DEBUG nova.network.neutron [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Refreshing network info cache for port 7797cf74-15a6-482f-9e6c-39e396a230f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.502 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Start _get_guest_xml network_info=[{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.509 255071 WARNING nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.520 255071 DEBUG nova.virt.libvirt.host [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.521 255071 DEBUG nova.virt.libvirt.host [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.524 255071 DEBUG nova.virt.libvirt.host [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.525 255071 DEBUG nova.virt.libvirt.host [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.525 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.526 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.526 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.526 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.527 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.527 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.527 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.527 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.527 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.528 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.528 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.528 255071 DEBUG nova.virt.hardware [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
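Annotation: the sequence above shows nova.virt.hardware narrowing a "no preference" topology (0:0:0) down to the single candidate 1:1:1 for a 1-vCPU flavor, subject to per-dimension maxima of 65536. A minimal, hypothetical sketch of that enumeration step (illustrative only, not Nova's actual implementation; names are made up) could look like:

    # Hypothetical illustration of enumerating CPU topologies for a vCPU count
    # under per-dimension maxima, roughly mirroring the log messages above.
    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class Topology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Return every (sockets, cores, threads) combination whose product equals vcpus."""
        found = []
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                found.append(Topology(sockets=s, cores=c, threads=t))
        return found

    print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)], as in the log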
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.531 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.566 255071 DEBUG oslo_concurrency.processutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.567 255071 INFO nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Deleting local config drive /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config because it was imported into RBD.
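Annotation: the two entries above show the instance's config drive being imported into the Ceph "vms" pool with "rbd import" and the local copy being deleted afterwards. A hedged sketch of the same flow via subprocess (the rbd command line and paths are copied from the log; the wrapper function is illustrative, not Nova's code):

    # Illustrative only: import a local config-drive image into RBD, then drop the local copy.
    import os
    import subprocess

    def import_config_drive(local_path, pool, image_name, ceph_id="openstack",
                            conf="/etc/ceph/ceph.conf"):
        # Equivalent of the "rbd import" command seen in the log entry above.
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", ceph_id, "--conf", conf],
            check=True,
        )
        # Once the image lives in RBD, the local file is no longer needed.
        os.remove(local_path)

    # import_config_drive(
    #     "/var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012/disk.config",
    #     "vms", "bf99fdc2-8478-42a0-8ccb-4610de952012_disk.config")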
Nov 29 08:02:03 compute-0 kernel: tap700d2367-3b: entered promiscuous mode
Nov 29 08:02:03 compute-0 NetworkManager[49116]: <info>  [1764403323.6304] manager: (tap700d2367-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 08:02:03 compute-0 ovn_controller[153295]: 2025-11-29T08:02:03Z|00054|binding|INFO|Claiming lport 700d2367-3b63-4cd6-acb3-a96968287ef7 for this chassis.
Nov 29 08:02:03 compute-0 ovn_controller[153295]: 2025-11-29T08:02:03Z|00055|binding|INFO|700d2367-3b63-4cd6-acb3-a96968287ef7: Claiming fa:16:3e:d6:a9:88 10.100.0.5
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.636 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.643 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:a9:88 10.100.0.5'], port_security=['fa:16:3e:d6:a9:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bf99fdc2-8478-42a0-8ccb-4610de952012', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c59af87-9673-4cef-a687-a8698997e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '130bffe4c30f493aa286a3620fd260ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '622ec407-a852-4817-9d30-c772bce4eb0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65ecf4-28fb-4c94-9662-60b0418afead, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=700d2367-3b63-4cd6-acb3-a96968287ef7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.644 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 700d2367-3b63-4cd6-acb3-a96968287ef7 in datapath 7c59af87-9673-4cef-a687-a8698997e2ed bound to our chassis
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.646 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:02:03 compute-0 ovn_controller[153295]: 2025-11-29T08:02:03Z|00056|binding|INFO|Setting lport 700d2367-3b63-4cd6-acb3-a96968287ef7 ovn-installed in OVS
Nov 29 08:02:03 compute-0 ovn_controller[153295]: 2025-11-29T08:02:03Z|00057|binding|INFO|Setting lport 700d2367-3b63-4cd6-acb3-a96968287ef7 up in Southbound
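Annotation: ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and sets it up in the Southbound database; the metadata agent and Nova react to the resulting Port_Binding update. To double-check such a binding by hand, a small helper around ovn-sbctl could be used (a sketch under the assumption that ovn-sbctl is available on the chassis; the column name matches the Port_Binding row logged above):

    # Illustrative check of an OVN Southbound Port_Binding row via ovn-sbctl.
    import json
    import subprocess

    def port_binding(logical_port):
        out = subprocess.run(
            ["ovn-sbctl", "--format=json", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # binding = port_binding("700d2367-3b63-4cd6-acb3-a96968287ef7")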
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.660 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.664 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5486d928-fb76-467c-98b0-3224e73b2596]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.666 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c59af87-91 in ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.668 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c59af87-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.669 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[405f6e72-d39b-4fc8-8b26-bae1c3dca066]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.670 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[67001a6e-90e3-4a4d-819a-81f3a4ee8175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 systemd-machined[216271]: New machine qemu-4-instance-00000004.
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.686 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5b9fa8-6423-48da-b8d9-8775b94b019d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892915163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892915163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:03 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 29 08:02:03 compute-0 systemd-udevd[269586]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.714 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[eebd6a3c-1528-4a1e-89e1-32ccd55ecc6e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 NetworkManager[49116]: <info>  [1764403323.7257] device (tap700d2367-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:02:03 compute-0 NetworkManager[49116]: <info>  [1764403323.7269] device (tap700d2367-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.758 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ce391cf1-89a1-45cf-a7e1-22149afeb473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 NetworkManager[49116]: <info>  [1764403323.7674] manager: (tap7c59af87-90): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.766 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b78cc7e3-347c-499d-bc26-85da60b88110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.810 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[fc136761-d9d2-4730-a620-a65c5fa171f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.815 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[c46b78b8-1a6a-4b14-8ce1-a719a32c2621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 NetworkManager[49116]: <info>  [1764403323.8523] device (tap7c59af87-90): carrier: link connected
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.863 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3a2bd1-3a09-410f-8aa5-786a6037ad7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.868 255071 DEBUG nova.compute.manager [req-014db83e-cd80-4def-ab1e-0cb4893b51c8 req-91780c5c-b096-44ea-a134-bb4414c969e3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.869 255071 DEBUG oslo_concurrency.lockutils [req-014db83e-cd80-4def-ab1e-0cb4893b51c8 req-91780c5c-b096-44ea-a134-bb4414c969e3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.869 255071 DEBUG oslo_concurrency.lockutils [req-014db83e-cd80-4def-ab1e-0cb4893b51c8 req-91780c5c-b096-44ea-a134-bb4414c969e3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.869 255071 DEBUG oslo_concurrency.lockutils [req-014db83e-cd80-4def-ab1e-0cb4893b51c8 req-91780c5c-b096-44ea-a134-bb4414c969e3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:03 compute-0 nova_compute[255040]: 2025-11-29 08:02:03.870 255071 DEBUG nova.compute.manager [req-014db83e-cd80-4def-ab1e-0cb4893b51c8 req-91780c5c-b096-44ea-a134-bb4414c969e3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Processing event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.887 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9263f634-beeb-45fe-9b98-a25522a9ccf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c59af87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:14:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556119, 'reachable_time': 22148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269616, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.910 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6f2066-f2cc-4520-b619-f054bda83457]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:1495'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556119, 'tstamp': 556119}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269617, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.937 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee3ca04-ca37-460c-b236-9e2f98d85299]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c59af87-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:14:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556119, 'reachable_time': 22148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269618, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:03.981 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[20871a34-90c9-4342-b83b-532376d2f54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314284623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.050 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.053 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[35373e73-fe97-4f00-8383-ed537a5df673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.055 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c59af87-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.056 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.056 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c59af87-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 NetworkManager[49116]: <info>  [1764403324.0596] manager: (tap7c59af87-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 08:02:04 compute-0 kernel: tap7c59af87-90: entered promiscuous mode
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.065 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c59af87-90, col_values=(('external_ids', {'iface-id': '10f95636-701a-4c25-bac8-f036309f6a48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 ovn_controller[153295]: 2025-11-29T08:02:04Z|00058|binding|INFO|Releasing lport 10f95636-701a-4c25-bac8-f036309f6a48 from this chassis (sb_readonly=0)
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.070 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.071 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[09fa2566-d3ba-45d6-a33c-de259abc1ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.072 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7c59af87-9673-4cef-a687-a8698997e2ed.pid.haproxy
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7c59af87-9673-4cef-a687-a8698997e2ed
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:04.073 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'env', 'PROCESS_TAG=haproxy-7c59af87-9673-4cef-a687-a8698997e2ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c59af87-9673-4cef-a687-a8698997e2ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
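Annotation: the agent renders the haproxy configuration dumped above and then launches haproxy inside the ovnmeta- namespace via rootwrap. A minimal sketch of how such a per-network config might be templated, patterned on the logged file (the rendering helper and template constant are hypothetical; paths and the 169.254.169.254 bind come from the log):

    # Hypothetical rendering of a per-network metadata-proxy haproxy config,
    # patterned on the configuration dumped in the log above.
    from string import Template

    HAPROXY_TEMPLATE = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_config(network_id):
        return HAPROXY_TEMPLATE.substitute(
            network_id=network_id,
            pidfile=f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy",
            socket_path="/var/lib/neutron/metadata_proxy",
        )

    # print(render_config("7c59af87-9673-4cef-a687-a8698997e2ed"))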
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.086 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.094 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.117 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.313 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.316 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403324.3123264, bf99fdc2-8478-42a0-8ccb-4610de952012 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.316 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] VM Started (Lifecycle Event)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.320 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.329 255071 INFO nova.virt.libvirt.driver [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Instance spawned successfully.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.329 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.339 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.344 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.356 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.356 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.357 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.357 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.358 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.358 255071 DEBUG nova.virt.libvirt.driver [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.362 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.363 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403324.3138905, bf99fdc2-8478-42a0-8ccb-4610de952012 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.363 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] VM Paused (Lifecycle Event)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.390 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.396 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403324.3196704, bf99fdc2-8478-42a0-8ccb-4610de952012 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.397 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] VM Resumed (Lifecycle Event)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.432 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.437 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.444 255071 INFO nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Took 7.93 seconds to spawn the instance on the hypervisor.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.444 255071 DEBUG nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.457 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.496 255071 DEBUG nova.network.neutron [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updated VIF entry in instance network info cache for port 7797cf74-15a6-482f-9e6c-39e396a230f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.497 255071 DEBUG nova.network.neutron [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating instance_info_cache with network_info: [{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.513 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.515 255071 DEBUG oslo_concurrency.lockutils [req-40e883cd-a5a8-4d3b-9916-4f7a89f4ff6f req-ff808002-4365-4318-b2bc-c26096a03026 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.518 255071 INFO nova.compute.manager [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Took 9.41 seconds to build instance.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.534 255071 DEBUG oslo_concurrency.lockutils [None req-4b686013-4541-458d-bbec-51268c3886c2 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
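Annotation: the build of instance bf99fdc2-8478-42a0-8ccb-4610de952012 completes here, with the hypervisor spawn taking 7.93 seconds and the overall build 9.41 seconds. When comparing runs, those timings can be scraped straight out of journal output like the above; a small illustrative snippet (file name is a placeholder):

    # Illustrative: extract per-instance spawn/build durations from log lines like those above.
    import re

    PATTERN = re.compile(
        r"\[instance: (?P<uuid>[0-9a-f-]{36})\] "
        r"Took (?P<seconds>[\d.]+) seconds to "
        r"(?P<phase>spawn the instance on the hypervisor|build instance)"
    )

    def durations(lines):
        results = []
        for line in lines:
            m = PATTERN.search(line)
            if m:
                results.append((m.group("uuid"), m.group("phase"), float(m.group("seconds"))))
        return results

    # with open("compute-0.log") as f:
    #     print(durations(f))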
Nov 29 08:02:04 compute-0 podman[269730]: 2025-11-29 08:02:04.472292526 +0000 UTC m=+0.030874426 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:02:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234885121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:04 compute-0 ceph-mon[75237]: pgmap v1215: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 67 op/s
Nov 29 08:02:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/892915163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/892915163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1314284623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.617 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:04 compute-0 podman[269730]: 2025-11-29 08:02:04.620061407 +0000 UTC m=+0.178643277 container create 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.623 255071 DEBUG nova.virt.libvirt.vif [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1596728706',display_name='tempest-VolumesBackupsTest-instance-1596728706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1596728706',id=5,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOsawfirz1+mQwn2OYfT0MGTQb0vCfyvNFlQWVFRzwitiCzmybvG5MB9FisRTgFb4a4ZldjoJfyqGoH18Kt/Yhs/wjMEvBN+lMb7A242/Izl3Z2jYy6qVOMN0V5jJf+8zw==',key_name='tempest-keypair-827833387',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-5huvb9e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=f350e0f9-cc77-497b-a727-5576d4812c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.624 255071 DEBUG nova.network.os_vif_util [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.625 255071 DEBUG nova.network.os_vif_util [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.627 255071 DEBUG nova.objects.instance [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'pci_devices' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.651 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <uuid>f350e0f9-cc77-497b-a727-5576d4812c31</uuid>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <name>instance-00000005</name>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesBackupsTest-instance-1596728706</nova:name>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:02:03</nova:creationTime>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:user uuid="2f0bad5019c043259e8f0cdbb532a167">tempest-VolumesBackupsTest-433060525-project-member</nova:user>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:project uuid="122d6c1348a9421688c8c95fa7bfdf33">tempest-VolumesBackupsTest-433060525</nova:project>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <nova:port uuid="7797cf74-15a6-482f-9e6c-39e396a230f7">
Nov 29 08:02:04 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <system>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="serial">f350e0f9-cc77-497b-a727-5576d4812c31</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="uuid">f350e0f9-cc77-497b-a727-5576d4812c31</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </system>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <os>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </os>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <features>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </features>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/f350e0f9-cc77-497b-a727-5576d4812c31_disk">
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </source>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/f350e0f9-cc77-497b-a727-5576d4812c31_disk.config">
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </source>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:02:04 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:84:90:2b"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <target dev="tap7797cf74-15"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/console.log" append="off"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <video>
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </video>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:02:04 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:02:04 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:02:04 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:02:04 compute-0 nova_compute[255040]: </domain>
Nov 29 08:02:04 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.654 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Preparing to wait for external event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.655 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.655 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.656 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.656 255071 DEBUG nova.virt.libvirt.vif [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1596728706',display_name='tempest-VolumesBackupsTest-instance-1596728706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1596728706',id=5,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOsawfirz1+mQwn2OYfT0MGTQb0vCfyvNFlQWVFRzwitiCzmybvG5MB9FisRTgFb4a4ZldjoJfyqGoH18Kt/Yhs/wjMEvBN+lMb7A242/Izl3Z2jYy6qVOMN0V5jJf+8zw==',key_name='tempest-keypair-827833387',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-5huvb9e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=f350e0f9-cc77-497b-a727-5576d4812c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.657 255071 DEBUG nova.network.os_vif_util [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.658 255071 DEBUG nova.network.os_vif_util [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.658 255071 DEBUG os_vif [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.659 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.659 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.660 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.665 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.666 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7797cf74-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.666 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7797cf74-15, col_values=(('external_ids', {'iface-id': '7797cf74-15a6-482f-9e6c-39e396a230f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:90:2b', 'vm-uuid': 'f350e0f9-cc77-497b-a727-5576d4812c31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 NetworkManager[49116]: <info>  [1764403324.6697] manager: (tap7797cf74-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.668 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.672 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:04 compute-0 systemd[1]: Started libpod-conmon-2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3.scope.
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.676 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.677 255071 INFO os_vif [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15')
Nov 29 08:02:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b955fce9a6b67a85beea29c7d45f418cb0c6178ede4b2c1f20d120d82506631/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:04 compute-0 podman[269730]: 2025-11-29 08:02:04.906766802 +0000 UTC m=+0.465348682 container init 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:02:04 compute-0 podman[269730]: 2025-11-29 08:02:04.918281873 +0000 UTC m=+0.476863753 container start 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.931 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.935 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.936 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No VIF found with MAC fa:16:3e:84:90:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.937 255071 INFO nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Using config drive
Nov 29 08:02:04 compute-0 nova_compute[255040]: 2025-11-29 08:02:04.967 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:04 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [NOTICE]   (269754) : New worker (269774) forked
Nov 29 08:02:04 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [NOTICE]   (269754) : Loading success.
Nov 29 08:02:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.273 255071 INFO nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Creating config drive at /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.285 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperhmha8e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.416 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperhmha8e" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.449 255071 DEBUG nova.storage.rbd_utils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image f350e0f9-cc77-497b-a727-5576d4812c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.456 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config f350e0f9-cc77-497b-a727-5576d4812c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Nov 29 08:02:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1234885121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.605 255071 DEBUG oslo_concurrency.processutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config f350e0f9-cc77-497b-a727-5576d4812c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.607 255071 INFO nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Deleting local config drive /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31/disk.config because it was imported into RBD.
Nov 29 08:02:05 compute-0 kernel: tap7797cf74-15: entered promiscuous mode
Nov 29 08:02:05 compute-0 NetworkManager[49116]: <info>  [1764403325.6535] manager: (tap7797cf74-15): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 08:02:05 compute-0 ovn_controller[153295]: 2025-11-29T08:02:05Z|00059|binding|INFO|Claiming lport 7797cf74-15a6-482f-9e6c-39e396a230f7 for this chassis.
Nov 29 08:02:05 compute-0 ovn_controller[153295]: 2025-11-29T08:02:05Z|00060|binding|INFO|7797cf74-15a6-482f-9e6c-39e396a230f7: Claiming fa:16:3e:84:90:2b 10.100.0.8
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.660 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.670 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:90:2b 10.100.0.8'], port_security=['fa:16:3e:84:90:2b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f350e0f9-cc77-497b-a727-5576d4812c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87adc913-6830-41c3-8257-1aa0b3e37174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932d84e2-f2b7-4447-ace7-dc91550d516b, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=7797cf74-15a6-482f-9e6c-39e396a230f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.672 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 7797cf74-15a6-482f-9e6c-39e396a230f7 in datapath 0b79d41d-8eb2-4d4a-9786-7791592a7e66 bound to our chassis
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.673 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:02:05 compute-0 NetworkManager[49116]: <info>  [1764403325.6745] device (tap7797cf74-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:02:05 compute-0 NetworkManager[49116]: <info>  [1764403325.6754] device (tap7797cf74-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.687 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c9aca4ad-4027-4df0-8d0d-ac80687626e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.689 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b79d41d-81 in ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.691 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b79d41d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.691 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d98a78a9-76ea-451b-88ae-8869e676fd6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.692 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ad431ee3-eb6a-4407-a375-cc7be91dc5e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 systemd-machined[216271]: New machine qemu-5-instance-00000005.
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.708 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff89990-68ee-4c9a-b038-30ad7fb729ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.734 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[206cbc8b-5a8e-4ba2-838a-2890deab63fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.741 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:05 compute-0 ovn_controller[153295]: 2025-11-29T08:02:05Z|00061|binding|INFO|Setting lport 7797cf74-15a6-482f-9e6c-39e396a230f7 ovn-installed in OVS
Nov 29 08:02:05 compute-0 ovn_controller[153295]: 2025-11-29T08:02:05Z|00062|binding|INFO|Setting lport 7797cf74-15a6-482f-9e6c-39e396a230f7 up in Southbound
Nov 29 08:02:05 compute-0 nova_compute[255040]: 2025-11-29 08:02:05.753 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.772 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[58f16349-c200-40fe-82b2-3180e1358bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.778 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfdfe6c-d521-40e2-a7f3-25007b40cf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 NetworkManager[49116]: <info>  [1764403325.7826] manager: (tap0b79d41d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.820 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[07035be2-7693-41e7-8ac3-d2f29739502d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.824 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[1a50e005-e140-433b-a0ed-d213e0539f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 NetworkManager[49116]: <info>  [1764403325.8546] device (tap0b79d41d-80): carrier: link connected
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.862 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc9003e-2c36-4c04-8c73-e923e7cb6fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.882 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b13d6f-2edd-48e0-8bba-0b68e780c441]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b79d41d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ac:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556319, 'reachable_time': 33355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269852, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.905 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7368432f-aedf-4c81-878f-a41b12b8e80b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:ac51'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556319, 'tstamp': 556319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269853, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.928 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[44eb98aa-72e3-4229-9723-cc17945e1f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b79d41d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ac:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556319, 'reachable_time': 33355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269854, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:05.973 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc7817a-9714-4569-a8d3-bcba6383aae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.040 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[438daf61-a5e6-486d-800b-bc6b696c6b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.041 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b79d41d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.042 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.042 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b79d41d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:06 compute-0 NetworkManager[49116]: <info>  [1764403326.0809] manager: (tap0b79d41d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.080 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:06 compute-0 kernel: tap0b79d41d-80: entered promiscuous mode
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.083 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.085 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b79d41d-80, col_values=(('external_ids', {'iface-id': '9fb3b3e1-f71e-47ab-acbc-6f0864db08ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:06 compute-0 ovn_controller[153295]: 2025-11-29T08:02:06Z|00063|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.086 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.120 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.122 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.124 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f7640579-706a-4646-a24b-9a31631ba219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.125 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:02:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:06.129 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'env', 'PROCESS_TAG=haproxy-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b79d41d-8eb2-4d4a-9786-7791592a7e66.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.138 255071 DEBUG nova.compute.manager [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.139 255071 DEBUG oslo_concurrency.lockutils [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.139 255071 DEBUG oslo_concurrency.lockutils [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.140 255071 DEBUG oslo_concurrency.lockutils [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.140 255071 DEBUG nova.compute.manager [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] No waiting events found dispatching network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.141 255071 WARNING nova.compute.manager [req-08a6fc20-61f4-4dd6-bd2f-9308d79aec97 req-1e55cbb0-17c8-460c-9b4e-dbd263c021e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received unexpected event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 for instance with vm_state active and task_state None.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.157 255071 DEBUG nova.compute.manager [req-de3f340c-8f6a-4720-8e59-58ceb6f9de94 req-f897fdcd-4a68-4f9b-8c74-86b44d7b9642 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.157 255071 DEBUG oslo_concurrency.lockutils [req-de3f340c-8f6a-4720-8e59-58ceb6f9de94 req-f897fdcd-4a68-4f9b-8c74-86b44d7b9642 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.158 255071 DEBUG oslo_concurrency.lockutils [req-de3f340c-8f6a-4720-8e59-58ceb6f9de94 req-f897fdcd-4a68-4f9b-8c74-86b44d7b9642 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.159 255071 DEBUG oslo_concurrency.lockutils [req-de3f340c-8f6a-4720-8e59-58ceb6f9de94 req-f897fdcd-4a68-4f9b-8c74-86b44d7b9642 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.159 255071 DEBUG nova.compute.manager [req-de3f340c-8f6a-4720-8e59-58ceb6f9de94 req-f897fdcd-4a68-4f9b-8c74-86b44d7b9642 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Processing event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:02:06 compute-0 sudo[269900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:06 compute-0 sudo[269900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:06 compute-0 sudo[269900]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.511 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403326.5113688, f350e0f9-cc77-497b-a727-5576d4812c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.512 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] VM Started (Lifecycle Event)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.516 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:02:06 compute-0 sudo[269940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:06 compute-0 sudo[269940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.523 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:02:06 compute-0 sudo[269940]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.528 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.529 255071 INFO nova.virt.libvirt.driver [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Instance spawned successfully.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.529 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.537 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.546 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.546 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.547 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.547 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.547 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.548 255071 DEBUG nova.virt.libvirt.driver [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.554 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.555 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403326.5115027, f350e0f9-cc77-497b-a727-5576d4812c31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.555 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] VM Paused (Lifecycle Event)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.587 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:06 compute-0 sudo[269977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:06 compute-0 podman[269974]: 2025-11-29 08:02:06.593296489 +0000 UTC m=+0.056921698 container create d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:02:06 compute-0 sudo[269977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:06 compute-0 sudo[269977]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:06 compute-0 ceph-mon[75237]: pgmap v1216: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.623 255071 INFO nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Took 8.92 seconds to spawn the instance on the hypervisor.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.625 255071 DEBUG nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.632 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403326.5244539, f350e0f9-cc77-497b-a727-5576d4812c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.633 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] VM Resumed (Lifecycle Event)
Nov 29 08:02:06 compute-0 systemd[1]: Started libpod-conmon-d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e.scope.
Nov 29 08:02:06 compute-0 podman[269974]: 2025-11-29 08:02:06.56441132 +0000 UTC m=+0.028036559 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.675 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:06 compute-0 sudo[270013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:02:06 compute-0 sudo[270013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.683 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6741b302fa4062fe85947d53188997040b21ca5904d144fd8e9053bed6de8859/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:06 compute-0 podman[269974]: 2025-11-29 08:02:06.698159212 +0000 UTC m=+0.161784441 container init d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:02:06 compute-0 podman[269974]: 2025-11-29 08:02:06.711510933 +0000 UTC m=+0.175136142 container start d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:06 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [NOTICE]   (270044) : New worker (270046) forked
Nov 29 08:02:06 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [NOTICE]   (270044) : Loading success.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.871 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.892 255071 INFO nova.compute.manager [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Took 10.09 seconds to build instance.
Nov 29 08:02:06 compute-0 nova_compute[255040]: 2025-11-29 08:02:06.909 255071 DEBUG oslo_concurrency.lockutils [None req-7f445f7f-7eb0-4a05-8b96-0d2c298d7d94 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.108 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.109 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.109 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.110 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.110 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.111 255071 INFO nova.compute.manager [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Terminating instance
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.112 255071 DEBUG nova.compute.manager [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:02:07 compute-0 sudo[270013]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:02:07 compute-0 kernel: tap700d2367-3b (unregistering): left promiscuous mode
Nov 29 08:02:07 compute-0 NetworkManager[49116]: <info>  [1764403327.3174] device (tap700d2367-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:02:07 compute-0 ovn_controller[153295]: 2025-11-29T08:02:07Z|00064|binding|INFO|Releasing lport 700d2367-3b63-4cd6-acb3-a96968287ef7 from this chassis (sb_readonly=0)
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.336 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 ovn_controller[153295]: 2025-11-29T08:02:07Z|00065|binding|INFO|Setting lport 700d2367-3b63-4cd6-acb3-a96968287ef7 down in Southbound
Nov 29 08:02:07 compute-0 ovn_controller[153295]: 2025-11-29T08:02:07Z|00066|binding|INFO|Removing iface tap700d2367-3b ovn-installed in OVS
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.341 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 95842ff9-40f1-4991-870c-a468f719d500 does not exist
Nov 29 08:02:07 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a859e4a7-04d7-4346-aa86-f4e4b3184990 does not exist
Nov 29 08:02:07 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1e459800-f4d9-4929-b882-59d5f3f215bf does not exist
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.345 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:a9:88 10.100.0.5'], port_security=['fa:16:3e:d6:a9:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bf99fdc2-8478-42a0-8ccb-4610de952012', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c59af87-9673-4cef-a687-a8698997e2ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '130bffe4c30f493aa286a3620fd260ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '622ec407-a852-4817-9d30-c772bce4eb0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee65ecf4-28fb-4c94-9662-60b0418afead, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=700d2367-3b63-4cd6-acb3-a96968287ef7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.347 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 700d2367-3b63-4cd6-acb3-a96968287ef7 in datapath 7c59af87-9673-4cef-a687-a8698997e2ed unbound from our chassis
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.348 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c59af87-9673-4cef-a687-a8698997e2ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.350 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d81cb3-84b1-4023-bbb4-87e30cfe2a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.351 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed namespace which is not needed anymore
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.360 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:02:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:07 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 29 08:02:07 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.368s CPU time.
Nov 29 08:02:07 compute-0 systemd-machined[216271]: Machine qemu-4-instance-00000004 terminated.
Nov 29 08:02:07 compute-0 sudo[270095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:07 compute-0 sudo[270095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:07 compute-0 sudo[270095]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Nov 29 08:02:07 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [NOTICE]   (269754) : haproxy version is 2.8.14-c23fe91
Nov 29 08:02:07 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [NOTICE]   (269754) : path to executable is /usr/sbin/haproxy
Nov 29 08:02:07 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [WARNING]  (269754) : Exiting Master process...
Nov 29 08:02:07 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [ALERT]    (269754) : Current worker (269774) exited with code 143 (Terminated)
Nov 29 08:02:07 compute-0 neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed[269749]: [WARNING]  (269754) : All workers exited. Exiting... (0)
Nov 29 08:02:07 compute-0 systemd[1]: libpod-2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3.scope: Deactivated successfully.
Nov 29 08:02:07 compute-0 conmon[269749]: conmon 2576fe3a0fd113c0c407 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3.scope/container/memory.events
Nov 29 08:02:07 compute-0 sudo[270134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:07 compute-0 podman[270132]: 2025-11-29 08:02:07.514547325 +0000 UTC m=+0.052962911 container died 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:07 compute-0 sudo[270134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:07 compute-0 sudo[270134]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b955fce9a6b67a85beea29c7d45f418cb0c6178ede4b2c1f20d120d82506631-merged.mount: Deactivated successfully.
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.557 255071 INFO nova.virt.libvirt.driver [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Instance destroyed successfully.
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.557 255071 DEBUG nova.objects.instance [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lazy-loading 'resources' on Instance uuid bf99fdc2-8478-42a0-8ccb-4610de952012 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:07 compute-0 podman[270132]: 2025-11-29 08:02:07.569119829 +0000 UTC m=+0.107535425 container cleanup 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.576 255071 DEBUG nova.virt.libvirt.vif [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-213456701',display_name='tempest-VolumesActionsTest-instance-213456701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-213456701',id=4,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='130bffe4c30f493aa286a3620fd260ca',ramdisk_id='',reservation_id='r-l0w0b93x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1980568150',owner_user_name='tempest-VolumesActionsTest-1980568150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:04Z,user_data=None,user_id='342375d9cda748d0bdc3985fba484510',uuid=bf99fdc2-8478-42a0-8ccb-4610de952012,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.577 255071 DEBUG nova.network.os_vif_util [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converting VIF {"id": "700d2367-3b63-4cd6-acb3-a96968287ef7", "address": "fa:16:3e:d6:a9:88", "network": {"id": "7c59af87-9673-4cef-a687-a8698997e2ed", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1730273405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "130bffe4c30f493aa286a3620fd260ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap700d2367-3b", "ovs_interfaceid": "700d2367-3b63-4cd6-acb3-a96968287ef7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.578 255071 DEBUG nova.network.os_vif_util [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.579 255071 DEBUG os_vif [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.581 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.584 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap700d2367-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:07 compute-0 systemd[1]: libpod-conmon-2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3.scope: Deactivated successfully.
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.586 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.587 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 sudo[270180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:07 compute-0 sudo[270180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.591 255071 INFO os_vif [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:a9:88,bridge_name='br-int',has_traffic_filtering=True,id=700d2367-3b63-4cd6-acb3-a96968287ef7,network=Network(7c59af87-9673-4cef-a687-a8698997e2ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap700d2367-3b')
Nov 29 08:02:07 compute-0 sudo[270180]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:02:07 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:02:07 compute-0 podman[270217]: 2025-11-29 08:02:07.645365179 +0000 UTC m=+0.052407107 container remove 2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:02:07 compute-0 sudo[270232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.653 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a6e73b-5ead-42f9-a8a1-96c0854b9493]: (4, ('Sat Nov 29 08:02:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed (2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3)\n2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3\nSat Nov 29 08:02:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed (2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3)\n2576fe3a0fd113c0c4070579a6e16fd1ad6f044055274cf6504d8534e643baa3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.655 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[79493d65-41e7-406d-90ef-b64033f118ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.656 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c59af87-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:07 compute-0 sudo[270232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:07 compute-0 kernel: tap7c59af87-90: left promiscuous mode
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.663 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.670 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ec895613-f5ff-4ff0-995a-b91e00e6a8c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.673 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.684 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[68d0def4-1e13-4b7a-9e5d-162b0145d82f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.686 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ed81b52f-4110-45f7-b94c-aa88ac575ec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.702 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e50909a8-fb1e-414e-9998-5e686a40f8c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556109, 'reachable_time': 41602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270276, 'error': None, 'target': 'ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c59af87\x2d9673\x2d4cef\x2da687\x2da8698997e2ed.mount: Deactivated successfully.
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.707 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c59af87-9673-4cef-a687-a8698997e2ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:02:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:07.707 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[b5230f1a-8a3e-477a-95b5-ea4aff1a2a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.970 255071 INFO nova.virt.libvirt.driver [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Deleting instance files /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012_del
Nov 29 08:02:07 compute-0 nova_compute[255040]: 2025-11-29 08:02:07.972 255071 INFO nova.virt.libvirt.driver [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Deletion of /var/lib/nova/instances/bf99fdc2-8478-42a0-8ccb-4610de952012_del complete
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.020 255071 INFO nova.compute.manager [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Took 0.91 seconds to destroy the instance on the hypervisor.
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.021 255071 DEBUG oslo.service.loopingcall [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.021 255071 DEBUG nova.compute.manager [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.022 255071 DEBUG nova.network.neutron [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:02:08 compute-0 podman[270318]: 2025-11-29 08:02:08.064896271 +0000 UTC m=+0.050917126 container create 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:02:08 compute-0 systemd[1]: Started libpod-conmon-10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d.scope.
Nov 29 08:02:08 compute-0 podman[270318]: 2025-11-29 08:02:08.04631827 +0000 UTC m=+0.032339135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:08 compute-0 podman[270318]: 2025-11-29 08:02:08.169937799 +0000 UTC m=+0.155958664 container init 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:02:08 compute-0 podman[270318]: 2025-11-29 08:02:08.17812474 +0000 UTC m=+0.164145585 container start 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:02:08 compute-0 podman[270318]: 2025-11-29 08:02:08.183767242 +0000 UTC m=+0.169788107 container attach 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:02:08 compute-0 systemd[1]: libpod-10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d.scope: Deactivated successfully.
Nov 29 08:02:08 compute-0 charming_nightingale[270334]: 167 167
Nov 29 08:02:08 compute-0 conmon[270334]: conmon 10313f72ddd836c7b85f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d.scope/container/memory.events
Nov 29 08:02:08 compute-0 podman[270339]: 2025-11-29 08:02:08.229873478 +0000 UTC m=+0.024865483 container died 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8da02791432c5de00c9984250c8f5718a0ff22f7c97860e93a2ade7d6dc44a6-merged.mount: Deactivated successfully.
Nov 29 08:02:08 compute-0 podman[270339]: 2025-11-29 08:02:08.267147905 +0000 UTC m=+0.062139900 container remove 10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.266 255071 DEBUG nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-unplugged-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.268 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.268 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.268 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.269 255071 DEBUG nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] No waiting events found dispatching network-vif-unplugged-700d2367-3b63-4cd6-acb3-a96968287ef7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.269 255071 DEBUG nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-unplugged-700d2367-3b63-4cd6-acb3-a96968287ef7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.269 255071 DEBUG nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.269 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.270 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.270 255071 DEBUG oslo_concurrency.lockutils [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.270 255071 DEBUG nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] No waiting events found dispatching network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.271 255071 WARNING nova.compute.manager [req-dc20174c-3797-4d69-84ad-34440f15d476 req-ea89c960-da4f-4771-b6da-6e6dc7fa4d34 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received unexpected event network-vif-plugged-700d2367-3b63-4cd6-acb3-a96968287ef7 for instance with vm_state active and task_state deleting.
Nov 29 08:02:08 compute-0 systemd[1]: libpod-conmon-10313f72ddd836c7b85fdc23172ab8e6748c2bfc54aaed8a14935be20891ac3d.scope: Deactivated successfully.
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.317 255071 DEBUG nova.compute.manager [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.318 255071 DEBUG oslo_concurrency.lockutils [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.318 255071 DEBUG oslo_concurrency.lockutils [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.318 255071 DEBUG oslo_concurrency.lockutils [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.318 255071 DEBUG nova.compute.manager [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] No waiting events found dispatching network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.319 255071 WARNING nova.compute.manager [req-c9d20bbf-03cb-44ce-82fd-6b691c881255 req-660d4363-bcec-4310-9b6d-8adc3e842354 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received unexpected event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 for instance with vm_state active and task_state None.
Nov 29 08:02:08 compute-0 podman[270361]: 2025-11-29 08:02:08.469680947 +0000 UTC m=+0.051336059 container create ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:02:08 compute-0 systemd[1]: Started libpod-conmon-ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775.scope.
Nov 29 08:02:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:08 compute-0 podman[270361]: 2025-11-29 08:02:08.448293108 +0000 UTC m=+0.029948260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:08 compute-0 podman[270361]: 2025-11-29 08:02:08.568138346 +0000 UTC m=+0.149793478 container init ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:02:08 compute-0 podman[270361]: 2025-11-29 08:02:08.577874409 +0000 UTC m=+0.159529521 container start ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:02:08 compute-0 podman[270361]: 2025-11-29 08:02:08.583272745 +0000 UTC m=+0.164927937 container attach ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.607 255071 DEBUG nova.network.neutron [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:08 compute-0 ceph-mon[75237]: pgmap v1217: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.643 255071 INFO nova.compute.manager [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Took 0.62 seconds to deallocate network for instance.
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.688 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.689 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.777 255071 DEBUG oslo_concurrency.processutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:08 compute-0 ovn_controller[153295]: 2025-11-29T08:02:08Z|00067|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:02:08 compute-0 NetworkManager[49116]: <info>  [1764403328.7803] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 08:02:08 compute-0 NetworkManager[49116]: <info>  [1764403328.7813] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.802 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:08 compute-0 ovn_controller[153295]: 2025-11-29T08:02:08Z|00068|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.817 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:08 compute-0 nova_compute[255040]: 2025-11-29 08:02:08.830 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777200741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.332 255071 DEBUG oslo_concurrency.processutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.340 255071 DEBUG nova.compute.provider_tree [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.362 255071 DEBUG nova.scheduler.client.report [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.388 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.418 255071 INFO nova.scheduler.client.report [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Deleted allocations for instance bf99fdc2-8478-42a0-8ccb-4610de952012
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.479 255071 DEBUG oslo_concurrency.lockutils [None req-84743aad-98fd-424b-8f74-ce09917dc697 342375d9cda748d0bdc3985fba484510 130bffe4c30f493aa286a3620fd260ca - - default default] Lock "bf99fdc2-8478-42a0-8ccb-4610de952012" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 99 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Nov 29 08:02:09 compute-0 nova_compute[255040]: 2025-11-29 08:02:09.509 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/777200741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:09 compute-0 awesome_taussig[270377]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:02:09 compute-0 awesome_taussig[270377]: --> relative data size: 1.0
Nov 29 08:02:09 compute-0 awesome_taussig[270377]: --> All data devices are unavailable
Nov 29 08:02:09 compute-0 systemd[1]: libpod-ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775.scope: Deactivated successfully.
Nov 29 08:02:09 compute-0 podman[270361]: 2025-11-29 08:02:09.673243168 +0000 UTC m=+1.254898290 container died ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:02:09 compute-0 systemd[1]: libpod-ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775.scope: Consumed 1.042s CPU time.
Nov 29 08:02:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b82896995a735c170dc65f09280dda1e674296117a2aa0b65476d0da05dcf1-merged.mount: Deactivated successfully.
Nov 29 08:02:09 compute-0 podman[270361]: 2025-11-29 08:02:09.739308512 +0000 UTC m=+1.320963634 container remove ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 08:02:09 compute-0 systemd[1]: libpod-conmon-ed45136baf86a9864c657b06e33b63bec6a32de66dd393404bd07964d18da775.scope: Deactivated successfully.
Nov 29 08:02:09 compute-0 sudo[270232]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:09 compute-0 sudo[270443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:09 compute-0 sudo[270443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:09 compute-0 sudo[270443]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:09 compute-0 sudo[270468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:09 compute-0 sudo[270468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:09 compute-0 sudo[270468]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:09 compute-0 sudo[270493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:09 compute-0 sudo[270493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:09 compute-0 sudo[270493]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:10 compute-0 sudo[270518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:02:10 compute-0 sudo[270518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.403132574 +0000 UTC m=+0.047818063 container create 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:02:10 compute-0 systemd[1]: Started libpod-conmon-979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85.scope.
Nov 29 08:02:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.459 255071 DEBUG nova.compute.manager [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-changed-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.461 255071 DEBUG nova.compute.manager [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Refreshing instance network info cache due to event network-changed-7797cf74-15a6-482f-9e6c-39e396a230f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.461 255071 DEBUG oslo_concurrency.lockutils [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.462 255071 DEBUG oslo_concurrency.lockutils [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.462 255071 DEBUG nova.network.neutron [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Refreshing network info cache for port 7797cf74-15a6-482f-9e6c-39e396a230f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.469930628 +0000 UTC m=+0.114616147 container init 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.382046394 +0000 UTC m=+0.026731923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.478230703 +0000 UTC m=+0.122916182 container start 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.481999775 +0000 UTC m=+0.126685354 container attach 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:02:10 compute-0 wonderful_hugle[270595]: 167 167
Nov 29 08:02:10 compute-0 systemd[1]: libpod-979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85.scope: Deactivated successfully.
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.486383743 +0000 UTC m=+0.131069222 container died 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 08:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-689424ce5f5d6869f1502933591728e062c366eff2aaea1785068b0e2d3e284e-merged.mount: Deactivated successfully.
Nov 29 08:02:10 compute-0 nova_compute[255040]: 2025-11-29 08:02:10.524 255071 DEBUG nova.compute.manager [req-f6b5b0b5-2efb-4661-aebd-df41aa31df18 req-ffe4cb97-faca-4fb6-a164-cb6fc06217ff cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Received event network-vif-deleted-700d2367-3b63-4cd6-acb3-a96968287ef7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:10 compute-0 podman[270579]: 2025-11-29 08:02:10.527440482 +0000 UTC m=+0.172125961 container remove 979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hugle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:02:10 compute-0 systemd[1]: libpod-conmon-979973c7d5dd00ea3d1d80132ec4aea5f33b9cd0fd25270dfa80fbb128e18c85.scope: Deactivated successfully.
Nov 29 08:02:10 compute-0 ceph-mon[75237]: pgmap v1218: 305 pgs: 305 active+clean; 99 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Nov 29 08:02:10 compute-0 podman[270620]: 2025-11-29 08:02:10.736545831 +0000 UTC m=+0.051910674 container create 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:02:10 compute-0 systemd[1]: Started libpod-conmon-51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb.scope.
Nov 29 08:02:10 compute-0 podman[270620]: 2025-11-29 08:02:10.712156361 +0000 UTC m=+0.027521264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041752fc4476f2c9e796b17f5de2dd469d51771037f9ac9d8ebef752e8a1d76e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041752fc4476f2c9e796b17f5de2dd469d51771037f9ac9d8ebef752e8a1d76e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041752fc4476f2c9e796b17f5de2dd469d51771037f9ac9d8ebef752e8a1d76e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041752fc4476f2c9e796b17f5de2dd469d51771037f9ac9d8ebef752e8a1d76e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:10 compute-0 podman[270620]: 2025-11-29 08:02:10.839773809 +0000 UTC m=+0.155138682 container init 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:02:10 compute-0 podman[270620]: 2025-11-29 08:02:10.852903554 +0000 UTC m=+0.168268387 container start 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 08:02:10 compute-0 podman[270620]: 2025-11-29 08:02:10.856434969 +0000 UTC m=+0.171799812 container attach 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:02:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 844 KiB/s wr, 212 op/s
Nov 29 08:02:11 compute-0 keen_ritchie[270637]: {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     "0": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "devices": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "/dev/loop3"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             ],
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_name": "ceph_lv0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_size": "21470642176",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "name": "ceph_lv0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "tags": {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.crush_device_class": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.encrypted": "0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_id": "0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.vdo": "0"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             },
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "vg_name": "ceph_vg0"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         }
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     ],
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     "1": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "devices": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "/dev/loop4"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             ],
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_name": "ceph_lv1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_size": "21470642176",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "name": "ceph_lv1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "tags": {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.crush_device_class": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.encrypted": "0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_id": "1",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.vdo": "0"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             },
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "vg_name": "ceph_vg1"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         }
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     ],
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     "2": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "devices": [
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "/dev/loop5"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             ],
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_name": "ceph_lv2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_size": "21470642176",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "name": "ceph_lv2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "tags": {
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.crush_device_class": "",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.encrypted": "0",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osd_id": "2",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:                 "ceph.vdo": "0"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             },
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "type": "block",
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:             "vg_name": "ceph_vg2"
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:         }
Nov 29 08:02:11 compute-0 keen_ritchie[270637]:     ]
Nov 29 08:02:11 compute-0 keen_ritchie[270637]: }
Nov 29 08:02:11 compute-0 systemd[1]: libpod-51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb.scope: Deactivated successfully.
Nov 29 08:02:11 compute-0 conmon[270637]: conmon 51e90b31b1b78ca256a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb.scope/container/memory.events
Nov 29 08:02:11 compute-0 podman[270620]: 2025-11-29 08:02:11.746462461 +0000 UTC m=+1.061827284 container died 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-041752fc4476f2c9e796b17f5de2dd469d51771037f9ac9d8ebef752e8a1d76e-merged.mount: Deactivated successfully.
Nov 29 08:02:11 compute-0 podman[270620]: 2025-11-29 08:02:11.803250335 +0000 UTC m=+1.118615158 container remove 51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ritchie, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:11 compute-0 systemd[1]: libpod-conmon-51e90b31b1b78ca256a23c234a6e4f70bf922b591b175454f8c4c2fd32bff9cb.scope: Deactivated successfully.
Nov 29 08:02:11 compute-0 sudo[270518]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:11 compute-0 sudo[270659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:11 compute-0 sudo[270659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:11 compute-0 sudo[270659]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:11 compute-0 sudo[270684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:11 compute-0 sudo[270684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:11 compute-0 sudo[270684]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:12 compute-0 nova_compute[255040]: 2025-11-29 08:02:12.038 255071 DEBUG nova.network.neutron [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updated VIF entry in instance network info cache for port 7797cf74-15a6-482f-9e6c-39e396a230f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:12 compute-0 nova_compute[255040]: 2025-11-29 08:02:12.041 255071 DEBUG nova.network.neutron [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating instance_info_cache with network_info: [{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:12 compute-0 sudo[270709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:12 compute-0 sudo[270709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:12 compute-0 sudo[270709]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:12 compute-0 nova_compute[255040]: 2025-11-29 08:02:12.068 255071 DEBUG oslo_concurrency.lockutils [req-85f078ea-7e69-4c38-a3cb-c230675f0669 req-e10aeb52-7fb1-4f0f-bf03-7adf1312c65d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:12 compute-0 sudo[270734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:02:12 compute-0 sudo[270734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:12 compute-0 podman[270797]: 2025-11-29 08:02:12.517554241 +0000 UTC m=+0.068231825 container create eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:02:12 compute-0 podman[270797]: 2025-11-29 08:02:12.481586489 +0000 UTC m=+0.032264123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:12 compute-0 nova_compute[255040]: 2025-11-29 08:02:12.586 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:12 compute-0 systemd[1]: Started libpod-conmon-eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794.scope.
Nov 29 08:02:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:13 compute-0 ceph-mon[75237]: pgmap v1219: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 844 KiB/s wr, 212 op/s
Nov 29 08:02:13 compute-0 podman[270797]: 2025-11-29 08:02:13.136267213 +0000 UTC m=+0.686944897 container init eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:02:13 compute-0 podman[270797]: 2025-11-29 08:02:13.149052929 +0000 UTC m=+0.699730513 container start eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:02:13 compute-0 serene_wilbur[270814]: 167 167
Nov 29 08:02:13 compute-0 systemd[1]: libpod-eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794.scope: Deactivated successfully.
Nov 29 08:02:13 compute-0 podman[270797]: 2025-11-29 08:02:13.18391566 +0000 UTC m=+0.734593244 container attach eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:02:13 compute-0 podman[270797]: 2025-11-29 08:02:13.185634847 +0000 UTC m=+0.736312461 container died eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b3f9f1830a30d98c91222c66e6980abdbc547229e15eff83f086092609fa64c-merged.mount: Deactivated successfully.
Nov 29 08:02:13 compute-0 podman[270797]: 2025-11-29 08:02:13.245054852 +0000 UTC m=+0.795732446 container remove eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wilbur, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:13 compute-0 systemd[1]: libpod-conmon-eda1a1bf86bdb5227a7b9acd694e06be8286db05cc5d451249e5eda12d158794.scope: Deactivated successfully.
Nov 29 08:02:13 compute-0 podman[270817]: 2025-11-29 08:02:13.395457725 +0000 UTC m=+0.549824123 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:02:13 compute-0 podman[270865]: 2025-11-29 08:02:13.464373517 +0000 UTC m=+0.043854406 container create 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 189 op/s
Nov 29 08:02:13 compute-0 systemd[1]: Started libpod-conmon-653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca.scope.
Nov 29 08:02:13 compute-0 podman[270865]: 2025-11-29 08:02:13.446875294 +0000 UTC m=+0.026356203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eb4d4cc5dbdaa4af5d80d8d48cacda827b5602b8b51b7641a880e43f9d577e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eb4d4cc5dbdaa4af5d80d8d48cacda827b5602b8b51b7641a880e43f9d577e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eb4d4cc5dbdaa4af5d80d8d48cacda827b5602b8b51b7641a880e43f9d577e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1eb4d4cc5dbdaa4af5d80d8d48cacda827b5602b8b51b7641a880e43f9d577e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:13 compute-0 podman[270865]: 2025-11-29 08:02:13.578308254 +0000 UTC m=+0.157789193 container init 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:02:13 compute-0 podman[270865]: 2025-11-29 08:02:13.591773268 +0000 UTC m=+0.171254157 container start 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 08:02:13 compute-0 podman[270865]: 2025-11-29 08:02:13.596954457 +0000 UTC m=+0.176435356 container attach 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:02:14 compute-0 ceph-mon[75237]: pgmap v1220: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 189 op/s
Nov 29 08:02:14 compute-0 nova_compute[255040]: 2025-11-29 08:02:14.512 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]: {
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_id": 2,
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "type": "bluestore"
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     },
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_id": 0,
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "type": "bluestore"
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     },
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_id": 1,
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:         "type": "bluestore"
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]:     }
Nov 29 08:02:14 compute-0 eloquent_ardinghelli[270882]: }
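The JSON above is the result of the `ceph-volume ... raw list --format json` call issued through cephadm at 08:02:12: one entry per osd_uuid, recording ceph_fsid, the device-mapper path, osd_id and the bluestore type. A small sketch, under the assumption that this output was captured to a hypothetical raw_list.json, which checks that every entry belongs to the cluster fsid named in that cephadm command:

import json

EXPECTED_FSID = "321e9cb7-01a2-5759-bf8c-981c9a64aa3e"  # fsid passed to cephadm above

# "raw_list.json" is a hypothetical capture of the JSON printed by the
# eloquent_ardinghelli container.
with open("raw_list.json") as fh:
    raw = json.load(fh)

for osd_uuid, entry in sorted(raw.items()):
    ok = entry.get("ceph_fsid") == EXPECTED_FSID
    print(f"osd.{entry.get('osd_id')} uuid={osd_uuid} device={entry.get('device')} "
          f"type={entry.get('type')} fsid_ok={ok}")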
Nov 29 08:02:14 compute-0 systemd[1]: libpod-653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca.scope: Deactivated successfully.
Nov 29 08:02:14 compute-0 systemd[1]: libpod-653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca.scope: Consumed 1.046s CPU time.
Nov 29 08:02:14 compute-0 podman[270865]: 2025-11-29 08:02:14.640725163 +0000 UTC m=+1.220206062 container died 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1eb4d4cc5dbdaa4af5d80d8d48cacda827b5602b8b51b7641a880e43f9d577e-merged.mount: Deactivated successfully.
Nov 29 08:02:14 compute-0 podman[270865]: 2025-11-29 08:02:14.701304279 +0000 UTC m=+1.280785168 container remove 653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:02:14 compute-0 systemd[1]: libpod-conmon-653b3edfb7c7c57d183d63d4cc7531acf319edefecf491acc263c539670781ca.scope: Deactivated successfully.
Nov 29 08:02:14 compute-0 sudo[270734]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:02:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:02:14 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 03308dca-a2d0-4dda-a904-ccd641f031e8 does not exist
Nov 29 08:02:14 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 3202ef00-540a-4b1b-a462-67366c6f1a4b does not exist
Nov 29 08:02:14 compute-0 sudo[270926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:14 compute-0 sudo[270926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:14 compute-0 sudo[270926]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:14 compute-0 sudo[270951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:02:14 compute-0 sudo[270951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:14 compute-0 sudo[270951]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 104 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 199 op/s
Nov 29 08:02:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:15 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:02:16 compute-0 ceph-mon[75237]: pgmap v1221: 305 pgs: 305 active+clean; 104 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 199 op/s
Nov 29 08:02:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 104 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 192 op/s
Nov 29 08:02:17 compute-0 nova_compute[255040]: 2025-11-29 08:02:17.591 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:19 compute-0 ceph-mon[75237]: pgmap v1222: 305 pgs: 305 active+clean; 104 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 192 op/s
Nov 29 08:02:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.3 MiB/s wr, 197 op/s
Nov 29 08:02:19 compute-0 nova_compute[255040]: 2025-11-29 08:02:19.515 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:20 compute-0 ceph-mon[75237]: pgmap v1223: 305 pgs: 305 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.3 MiB/s wr, 197 op/s
Nov 29 08:02:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:21 compute-0 nova_compute[255040]: 2025-11-29 08:02:21.062 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 236 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 MiB/s wr, 91 op/s
Nov 29 08:02:21 compute-0 ceph-mon[75237]: pgmap v1224: 305 pgs: 305 active+clean; 236 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 MiB/s wr, 91 op/s
Nov 29 08:02:21 compute-0 nova_compute[255040]: 2025-11-29 08:02:21.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:22 compute-0 ovn_controller[153295]: 2025-11-29T08:02:22Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:90:2b 10.100.0.8
Nov 29 08:02:22 compute-0 ovn_controller[153295]: 2025-11-29T08:02:22Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:90:2b 10.100.0.8
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.547 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403327.5463927, bf99fdc2-8478-42a0-8ccb-4610de952012 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.548 255071 INFO nova.compute.manager [-] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] VM Stopped (Lifecycle Event)
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.576 255071 DEBUG nova.compute.manager [None req-962902eb-5349-442f-b6bf-9f8d7ec0a98b - - - - - -] [instance: bf99fdc2-8478-42a0-8ccb-4610de952012] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.596 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:22 compute-0 nova_compute[255040]: 2025-11-29 08:02:22.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 299 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 17 MiB/s wr, 61 op/s
Nov 29 08:02:23 compute-0 podman[270976]: 2025-11-29 08:02:23.985569503 +0000 UTC m=+0.129393456 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:02:24 compute-0 nova_compute[255040]: 2025-11-29 08:02:24.516 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:24 compute-0 nova_compute[255040]: 2025-11-29 08:02:24.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:24 compute-0 nova_compute[255040]: 2025-11-29 08:02:24.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:24 compute-0 nova_compute[255040]: 2025-11-29 08:02:24.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:02:24 compute-0 nova_compute[255040]: 2025-11-29 08:02:24.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:02:25 compute-0 nova_compute[255040]: 2025-11-29 08:02:25.177 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:25 compute-0 nova_compute[255040]: 2025-11-29 08:02:25.178 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:25 compute-0 nova_compute[255040]: 2025-11-29 08:02:25.179 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:02:25 compute-0 nova_compute[255040]: 2025-11-29 08:02:25.180 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 377 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 23 MiB/s wr, 106 op/s
Nov 29 08:02:26 compute-0 ceph-mon[75237]: pgmap v1225: 305 pgs: 305 active+clean; 299 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 17 MiB/s wr, 61 op/s
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.496 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating instance_info_cache with network_info: [{"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.527 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-f350e0f9-cc77-497b-a727-5576d4812c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.529 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.530 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.567 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.567 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.567 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.568 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:02:26 compute-0 nova_compute[255040]: 2025-11-29 08:02:26.568 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352015308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.037 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
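nova-compute obtains the Ceph usage figures it needs for the resource audit by shelling out to the `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` command logged above. A sketch of re-running the same command by hand (it assumes the host has the client.openstack keyring and ceph.conf in place; the JSON schema of `ceph df` varies between releases, so only the top-level keys are printed):

import json
import subprocess

# Same command nova-compute runs above; needs /etc/ceph/ceph.conf and the
# client.openstack keyring on the host.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
df = json.loads(out)
# Print only the top-level keys rather than assuming a fixed schema, since the
# layout of `ceph df` JSON differs between Ceph releases.
print(sorted(df.keys()))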
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.109 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.109 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:02:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:27.123 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:27.124 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:27.125 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.322 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.324 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4511MB free_disk=59.947776794433594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.324 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.324 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.440 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance f350e0f9-cc77-497b-a727-5576d4812c31 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.440 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.440 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.470 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 377 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 22 MiB/s wr, 95 op/s
Nov 29 08:02:27 compute-0 ceph-mon[75237]: pgmap v1226: 305 pgs: 305 active+clean; 377 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 23 MiB/s wr, 106 op/s
Nov 29 08:02:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1352015308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.600 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736533129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.938 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.945 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:27 compute-0 nova_compute[255040]: 2025-11-29 08:02:27.975 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
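The inventory reported to placement above amounts to: MEMORY_MB total 7680 with 512 reserved at ratio 1.0, VCPU total 8 at ratio 4.0, DISK_GB total 59 with 1 reserved at ratio 0.9. As a rough illustration, using the usual placement reading of capacity as (total - reserved) * allocation_ratio, this host can schedule roughly 7168 MB of RAM, 32 vCPUs and about 52.2 GB of disk:

# Rough placement capacity arithmetic: capacity = (total - reserved) * allocation_ratio.
# Figures copied from the inventory data logged above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity ~ {capacity}")
# -> MEMORY_MB: 7168.0, VCPU: 32.0, DISK_GB: ~52.2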
Nov 29 08:02:28 compute-0 nova_compute[255040]: 2025-11-29 08:02:28.143 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:02:28 compute-0 nova_compute[255040]: 2025-11-29 08:02:28.144 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:28 compute-0 nova_compute[255040]: 2025-11-29 08:02:28.589 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:28 compute-0 nova_compute[255040]: 2025-11-29 08:02:28.590 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:28 compute-0 nova_compute[255040]: 2025-11-29 08:02:28.590 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:02:28 compute-0 ceph-mon[75237]: pgmap v1227: 305 pgs: 305 active+clean; 377 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 22 MiB/s wr, 95 op/s
Nov 29 08:02:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1736533129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 417 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 25 MiB/s wr, 96 op/s
Nov 29 08:02:29 compute-0 nova_compute[255040]: 2025-11-29 08:02:29.518 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:29 compute-0 podman[271039]: 2025-11-29 08:02:29.605495003 +0000 UTC m=+0.071161203 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:30 compute-0 ceph-mon[75237]: pgmap v1228: 305 pgs: 305 active+clean; 417 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 25 MiB/s wr, 96 op/s
Nov 29 08:02:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 457 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 21 MiB/s wr, 99 op/s
Nov 29 08:02:32 compute-0 nova_compute[255040]: 2025-11-29 08:02:32.638 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:32 compute-0 ceph-mon[75237]: pgmap v1229: 305 pgs: 305 active+clean; 457 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 21 MiB/s wr, 99 op/s
Nov 29 08:02:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 497 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 21 MiB/s wr, 81 op/s
Nov 29 08:02:34 compute-0 sshd-session[271059]: Connection closed by 220.250.59.155 port 38888
Nov 29 08:02:34 compute-0 ceph-mon[75237]: pgmap v1230: 305 pgs: 305 active+clean; 497 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 21 MiB/s wr, 81 op/s
Nov 29 08:02:34 compute-0 nova_compute[255040]: 2025-11-29 08:02:34.521 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 553 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 MiB/s wr, 72 op/s
Nov 29 08:02:35 compute-0 ceph-mon[75237]: pgmap v1231: 305 pgs: 305 active+clean; 553 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 MiB/s wr, 72 op/s
Nov 29 08:02:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 553 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 MiB/s wr, 24 op/s
Nov 29 08:02:37 compute-0 nova_compute[255040]: 2025-11-29 08:02:37.641 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:38 compute-0 ceph-mon[75237]: pgmap v1232: 305 pgs: 305 active+clean; 553 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 MiB/s wr, 24 op/s
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:38 compute-0 ovn_controller[153295]: 2025-11-29T08:02:38Z|00069|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:02:38
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log']
Nov 29 08:02:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:02:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 642 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 MiB/s wr, 48 op/s
Nov 29 08:02:39 compute-0 nova_compute[255040]: 2025-11-29 08:02:39.524 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1649243502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1649243502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:40 compute-0 ceph-mon[75237]: pgmap v1233: 305 pgs: 305 active+clean; 642 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 MiB/s wr, 48 op/s
Nov 29 08:02:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1649243502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1649243502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 713 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 MiB/s wr, 79 op/s
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.030 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.031 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.064 255071 DEBUG nova.objects.instance [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.096 255071 INFO nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Ignoring supplied device name: /dev/vdb
Nov 29 08:02:42 compute-0 ceph-mon[75237]: pgmap v1234: 305 pgs: 305 active+clean; 713 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 MiB/s wr, 79 op/s
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.645 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:42 compute-0 nova_compute[255040]: 2025-11-29 08:02:42.679 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:02:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.126 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.127 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.127 255071 INFO nova.compute.manager [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Attaching volume 0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c to /dev/vdb
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.349 255071 DEBUG os_brick.utils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.352 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.363 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.364 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[aec89b96-bd7c-4d02-a4b1-8a9b0558a984]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.366 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.374 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.374 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[10c0a41d-94b0-4f4f-93fc-d3cb567571d2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.379 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.388 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.388 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[3caef996-2bdb-4bbd-93fb-5b919f165d26]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.390 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[eee8c965-b97b-47d4-b794-755c505823d7]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.391 255071 DEBUG oslo_concurrency.processutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.424 255071 DEBUG oslo_concurrency.processutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.429 255071 DEBUG os_brick.initiator.connectors.lightos [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.429 255071 DEBUG os_brick.initiator.connectors.lightos [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.430 255071 DEBUG os_brick.initiator.connectors.lightos [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.430 255071 DEBUG os_brick.utils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:02:43 compute-0 nova_compute[255040]: 2025-11-29 08:02:43.431 255071 DEBUG nova.virt.block_device [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating existing volume attachment record: 3eab240c-c542-41f9-862c-8cdb6c0752a4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:02:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 763 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 25 MiB/s wr, 79 op/s
Nov 29 08:02:44 compute-0 podman[271068]: 2025-11-29 08:02:44.015175627 +0000 UTC m=+0.174587257 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:02:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088153627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.441 255071 DEBUG nova.objects.instance [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.472 255071 DEBUG nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Attempting to attach volume 0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.476 255071 DEBUG nova.virt.libvirt.guest [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:02:44 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c">
Nov 29 08:02:44 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   </source>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:02:44 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:02:44 compute-0 nova_compute[255040]:   <serial>0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c</serial>
Nov 29 08:02:44 compute-0 nova_compute[255040]: </disk>
Nov 29 08:02:44 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.526 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.603 255071 DEBUG nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.604 255071 DEBUG nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.605 255071 DEBUG nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.606 255071 DEBUG nova.virt.libvirt.driver [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No VIF found with MAC fa:16:3e:84:90:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:44 compute-0 ceph-mon[75237]: pgmap v1235: 305 pgs: 305 active+clean; 763 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 25 MiB/s wr, 79 op/s
Nov 29 08:02:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3088153627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-0 nova_compute[255040]: 2025-11-29 08:02:44.934 255071 DEBUG oslo_concurrency.lockutils [None req-245fcfe7-c044-4d5f-b618-076bcd722055 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 921 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 37 MiB/s wr, 99 op/s
Nov 29 08:02:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3174097092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 29 08:02:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 08:02:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 29 08:02:46 compute-0 ceph-mon[75237]: pgmap v1236: 305 pgs: 305 active+clean; 921 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 37 MiB/s wr, 99 op/s
Nov 29 08:02:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3174097092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 921 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 39 MiB/s wr, 107 op/s
Nov 29 08:02:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 29 08:02:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 08:02:47 compute-0 ceph-mon[75237]: osdmap e193: 3 total, 3 up, 3 in
Nov 29 08:02:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 29 08:02:47 compute-0 nova_compute[255040]: 2025-11-29 08:02:47.690 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:48 compute-0 ceph-mon[75237]: pgmap v1238: 305 pgs: 305 active+clean; 921 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 39 MiB/s wr, 107 op/s
Nov 29 08:02:48 compute-0 ceph-mon[75237]: osdmap e194: 3 total, 3 up, 3 in
Nov 29 08:02:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 55 MiB/s wr, 88 op/s
Nov 29 08:02:49 compute-0 nova_compute[255040]: 2025-11-29 08:02:49.528 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 29 08:02:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 08:02:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 29 08:02:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:50 compute-0 ceph-mon[75237]: pgmap v1240: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 55 MiB/s wr, 88 op/s
Nov 29 08:02:50 compute-0 ceph-mon[75237]: osdmap e195: 3 total, 3 up, 3 in
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.729 255071 DEBUG oslo_concurrency.lockutils [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.729 255071 DEBUG oslo_concurrency.lockutils [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.742 255071 INFO nova.compute.manager [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Detaching volume 0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.906 255071 INFO nova.virt.block_device [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Attempting to driver detach volume 0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c from mountpoint /dev/vdb
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.915 255071 DEBUG nova.virt.libvirt.driver [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Attempting to detach device vdb from instance f350e0f9-cc77-497b-a727-5576d4812c31 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.916 255071 DEBUG nova.virt.libvirt.guest [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c">
Nov 29 08:02:50 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   </source>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <serial>0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c</serial>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]: </disk>
Nov 29 08:02:50 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.922 255071 INFO nova.virt.libvirt.driver [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully detached device vdb from instance f350e0f9-cc77-497b-a727-5576d4812c31 from the persistent domain config.
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.922 255071 DEBUG nova.virt.libvirt.driver [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f350e0f9-cc77-497b-a727-5576d4812c31 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.923 255071 DEBUG nova.virt.libvirt.guest [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c">
Nov 29 08:02:50 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   </source>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <serial>0b4e95d1-ba9b-4234-8276-c9cfcb5a6d0c</serial>
Nov 29 08:02:50 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:02:50 compute-0 nova_compute[255040]: </disk>
Nov 29 08:02:50 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.980 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403370.979476, f350e0f9-cc77-497b-a727-5576d4812c31 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.987 255071 DEBUG nova.virt.libvirt.driver [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f350e0f9-cc77-497b-a727-5576d4812c31 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:02:50 compute-0 nova_compute[255040]: 2025-11-29 08:02:50.990 255071 INFO nova.virt.libvirt.driver [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully detached device vdb from instance f350e0f9-cc77-497b-a727-5576d4812c31 from the live domain config.
Nov 29 08:02:51 compute-0 nova_compute[255040]: 2025-11-29 08:02:51.135 255071 DEBUG nova.objects.instance [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:51 compute-0 nova_compute[255040]: 2025-11-29 08:02:51.170 255071 DEBUG oslo_concurrency.lockutils [None req-dd36b67c-72af-4ca7-a542-ee29a4cf414e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 37 MiB/s wr, 104 op/s
Nov 29 08:02:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 29 08:02:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 08:02:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 29 08:02:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217738540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217738540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.100 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.101 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.101 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.101 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.102 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.103 255071 INFO nova.compute.manager [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Terminating instance
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.104 255071 DEBUG nova.compute.manager [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:02:52 compute-0 kernel: tap7797cf74-15 (unregistering): left promiscuous mode
Nov 29 08:02:52 compute-0 NetworkManager[49116]: <info>  [1764403372.2298] device (tap7797cf74-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:02:52 compute-0 ovn_controller[153295]: 2025-11-29T08:02:52Z|00070|binding|INFO|Releasing lport 7797cf74-15a6-482f-9e6c-39e396a230f7 from this chassis (sb_readonly=0)
Nov 29 08:02:52 compute-0 ovn_controller[153295]: 2025-11-29T08:02:52Z|00071|binding|INFO|Setting lport 7797cf74-15a6-482f-9e6c-39e396a230f7 down in Southbound
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.243 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 ovn_controller[153295]: 2025-11-29T08:02:52Z|00072|binding|INFO|Removing iface tap7797cf74-15 ovn-installed in OVS
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.247 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.255 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:90:2b 10.100.0.8'], port_security=['fa:16:3e:84:90:2b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f350e0f9-cc77-497b-a727-5576d4812c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87adc913-6830-41c3-8257-1aa0b3e37174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932d84e2-f2b7-4447-ace7-dc91550d516b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=7797cf74-15a6-482f-9e6c-39e396a230f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.257 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 7797cf74-15a6-482f-9e6c-39e396a230f7 in datapath 0b79d41d-8eb2-4d4a-9786-7791592a7e66 unbound from our chassis
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.258 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b79d41d-8eb2-4d4a-9786-7791592a7e66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.262 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d13e5f4f-1df1-4e39-b3d5-cf563c0fc167]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.263 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 namespace which is not needed anymore
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.267 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 29 08:02:52 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.194s CPU time.
Nov 29 08:02:52 compute-0 systemd-machined[216271]: Machine qemu-5-instance-00000005 terminated.
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.327 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.333 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.342 255071 INFO nova.virt.libvirt.driver [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Instance destroyed successfully.
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.343 255071 DEBUG nova.objects.instance [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'resources' on Instance uuid f350e0f9-cc77-497b-a727-5576d4812c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.363 255071 DEBUG nova.virt.libvirt.vif [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1596728706',display_name='tempest-VolumesBackupsTest-instance-1596728706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1596728706',id=5,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOsawfirz1+mQwn2OYfT0MGTQb0vCfyvNFlQWVFRzwitiCzmybvG5MB9FisRTgFb4a4ZldjoJfyqGoH18Kt/Yhs/wjMEvBN+lMb7A242/Izl3Z2jYy6qVOMN0V5jJf+8zw==',key_name='tempest-keypair-827833387',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-5huvb9e5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=f350e0f9-cc77-497b-a727-5576d4812c31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.364 255071 DEBUG nova.network.os_vif_util [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "7797cf74-15a6-482f-9e6c-39e396a230f7", "address": "fa:16:3e:84:90:2b", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7797cf74-15", "ovs_interfaceid": "7797cf74-15a6-482f-9e6c-39e396a230f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.365 255071 DEBUG nova.network.os_vif_util [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.365 255071 DEBUG os_vif [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.371 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.372 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7797cf74-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.378 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.383 255071 INFO os_vif [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:90:2b,bridge_name='br-int',has_traffic_filtering=True,id=7797cf74-15a6-482f-9e6c-39e396a230f7,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7797cf74-15')
Nov 29 08:02:52 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [NOTICE]   (270044) : haproxy version is 2.8.14-c23fe91
Nov 29 08:02:52 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [NOTICE]   (270044) : path to executable is /usr/sbin/haproxy
Nov 29 08:02:52 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [WARNING]  (270044) : Exiting Master process...
Nov 29 08:02:52 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [ALERT]    (270044) : Current worker (270046) exited with code 143 (Terminated)
Nov 29 08:02:52 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[270033]: [WARNING]  (270044) : All workers exited. Exiting... (0)
Nov 29 08:02:52 compute-0 systemd[1]: libpod-d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e.scope: Deactivated successfully.
Nov 29 08:02:52 compute-0 podman[271151]: 2025-11-29 08:02:52.459471851 +0000 UTC m=+0.065424058 container died d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e-userdata-shm.mount: Deactivated successfully.
Nov 29 08:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6741b302fa4062fe85947d53188997040b21ca5904d144fd8e9053bed6de8859-merged.mount: Deactivated successfully.
Nov 29 08:02:52 compute-0 podman[271151]: 2025-11-29 08:02:52.514910388 +0000 UTC m=+0.120862575 container cleanup d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:02:52 compute-0 systemd[1]: libpod-conmon-d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e.scope: Deactivated successfully.
Nov 29 08:02:52 compute-0 podman[271198]: 2025-11-29 08:02:52.607724335 +0000 UTC m=+0.059409625 container remove d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.614 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[58263330-4719-4a7f-995b-81d9299eea88]: (4, ('Sat Nov 29 08:02:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 (d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e)\nd3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e\nSat Nov 29 08:02:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 (d3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e)\nd3fa594082147c4a8d5248a6be36304bd7556a5e3583d23d46c976ebb5a86a0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.616 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b077bbca-004d-4935-8df0-dc2fc3f2317a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.617 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b79d41d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.620 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 kernel: tap0b79d41d-80: left promiscuous mode
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.636 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.641 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[03be678b-5617-4ab9-9e61-33821d89f8c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.659 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c32f8ba6-9658-44bb-94c4-0a7678aa08c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.662 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7f3aab-ae6f-47c4-8aa7-e7c370648a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.683 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b0106bcf-0e5e-4ae0-8632-b26400ff15e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556310, 'reachable_time': 42263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271215, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d0b79d41d\x2d8eb2\x2d4d4a\x2d9786\x2d7791592a7e66.mount: Deactivated successfully.
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.688 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:02:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:52.689 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[432ef522-f5ec-4bb8-b034-f82cc6a61358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:52 compute-0 ceph-mon[75237]: pgmap v1242: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 37 MiB/s wr, 104 op/s
Nov 29 08:02:52 compute-0 ceph-mon[75237]: osdmap e196: 3 total, 3 up, 3 in
Nov 29 08:02:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/217738540' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/217738540' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.828 255071 INFO nova.virt.libvirt.driver [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Deleting instance files /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31_del
Nov 29 08:02:52 compute-0 nova_compute[255040]: 2025-11-29 08:02:52.830 255071 INFO nova.virt.libvirt.driver [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Deletion of /var/lib/nova/instances/f350e0f9-cc77-497b-a727-5576d4812c31_del complete
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.020 255071 INFO nova.compute.manager [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.021 255071 DEBUG oslo.service.loopingcall [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.022 255071 DEBUG nova.compute.manager [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.022 255071 DEBUG nova.network.neutron [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.209 255071 DEBUG nova.compute.manager [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-unplugged-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.210 255071 DEBUG oslo_concurrency.lockutils [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.211 255071 DEBUG oslo_concurrency.lockutils [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.211 255071 DEBUG oslo_concurrency.lockutils [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.212 255071 DEBUG nova.compute.manager [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] No waiting events found dispatching network-vif-unplugged-7797cf74-15a6-482f-9e6c-39e396a230f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:53 compute-0 nova_compute[255040]: 2025-11-29 08:02:53.212 255071 DEBUG nova.compute.manager [req-3e138f9d-6d7b-415a-a8d0-d84dd4ca2a53 req-3d85f8ea-303b-4d73-8a5a-35c4f393ac2f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-unplugged-7797cf74-15a6-482f-9e6c-39e396a230f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:02:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 816 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 37 MiB/s wr, 192 op/s
Nov 29 08:02:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337242624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337242624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:54 compute-0 ceph-mon[75237]: pgmap v1244: 305 pgs: 305 active+clean; 816 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 37 MiB/s wr, 192 op/s
Nov 29 08:02:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3337242624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3337242624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:54 compute-0 nova_compute[255040]: 2025-11-29 08:02:54.531 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 podman[271216]: 2025-11-29 08:02:54.914790625 +0000 UTC m=+0.072442458 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:02:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.325 255071 DEBUG nova.compute.manager [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.325 255071 DEBUG oslo_concurrency.lockutils [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.325 255071 DEBUG oslo_concurrency.lockutils [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.325 255071 DEBUG oslo_concurrency.lockutils [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.325 255071 DEBUG nova.compute.manager [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] No waiting events found dispatching network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.326 255071 WARNING nova.compute.manager [req-17ee7e5e-bc64-4112-992c-bc1392e29d14 req-0d59fdcb-6d33-49b4-a456-858572c5f61d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received unexpected event network-vif-plugged-7797cf74-15a6-482f-9e6c-39e396a230f7 for instance with vm_state active and task_state deleting.
Nov 29 08:02:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 29 MiB/s wr, 249 op/s
Nov 29 08:02:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:55.855 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:55.856 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.856 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.922 255071 DEBUG nova.network.neutron [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.942 255071 INFO nova.compute.manager [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Took 2.92 seconds to deallocate network for instance.
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.979 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:55 compute-0 nova_compute[255040]: 2025-11-29 08:02:55.980 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.032 255071 DEBUG oslo_concurrency.processutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.112 255071 DEBUG nova.compute.manager [req-25256b21-d924-45c7-b75b-67e950b321b0 req-4bc20c2c-7e5b-46a6-8a0c-8fb371bd5fe1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Received event network-vif-deleted-7797cf74-15a6-482f-9e6c-39e396a230f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753728075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.464 255071 DEBUG oslo_concurrency.processutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.471 255071 DEBUG nova.compute.provider_tree [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.490 255071 DEBUG nova.scheduler.client.report [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.515 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.550 255071 INFO nova.scheduler.client.report [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Deleted allocations for instance f350e0f9-cc77-497b-a727-5576d4812c31
Nov 29 08:02:56 compute-0 ceph-mon[75237]: pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 29 MiB/s wr, 249 op/s
Nov 29 08:02:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3753728075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:56 compute-0 nova_compute[255040]: 2025-11-29 08:02:56.611 255071 DEBUG oslo_concurrency.lockutils [None req-ca9eed76-d32c-4176-85ee-457a0d1c9c63 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "f350e0f9-cc77-497b-a727-5576d4812c31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:57 compute-0 nova_compute[255040]: 2025-11-29 08:02:57.376 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 4.0 MiB/s wr, 206 op/s
Nov 29 08:02:57 compute-0 nova_compute[255040]: 2025-11-29 08:02:57.608 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 nova_compute[255040]: 2025-11-29 08:02:57.717 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:02:57.858 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 29 08:02:58 compute-0 ceph-mon[75237]: pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 4.0 MiB/s wr, 206 op/s
Nov 29 08:02:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 08:02:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 29 08:02:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485394098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2485394098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946375518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1946375518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.4 KiB/s wr, 189 op/s
Nov 29 08:02:59 compute-0 nova_compute[255040]: 2025-11-29 08:02:59.534 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-0 ceph-mon[75237]: osdmap e197: 3 total, 3 up, 3 in
Nov 29 08:02:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2485394098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2485394098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1946375518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1946375518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:59 compute-0 podman[271259]: 2025-11-29 08:02:59.908014027 +0000 UTC m=+0.070298849 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:03:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 29 08:03:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 08:03:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 29 08:03:00 compute-0 ceph-mon[75237]: pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.4 KiB/s wr, 189 op/s
Nov 29 08:03:00 compute-0 ceph-mon[75237]: osdmap e198: 3 total, 3 up, 3 in
Nov 29 08:03:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.0 KiB/s wr, 123 op/s
Nov 29 08:03:02 compute-0 nova_compute[255040]: 2025-11-29 08:03:02.429 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:03 compute-0 ceph-mon[75237]: pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.0 KiB/s wr, 123 op/s
Nov 29 08:03:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 29 08:03:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46369827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/46369827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:04 compute-0 ceph-mon[75237]: pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 29 08:03:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/46369827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/46369827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:04 compute-0 nova_compute[255040]: 2025-11-29 08:03:04.535 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 29 08:03:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 08:03:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 29 08:03:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 KiB/s wr, 61 op/s
Nov 29 08:03:06 compute-0 ceph-mon[75237]: osdmap e199: 3 total, 3 up, 3 in
Nov 29 08:03:06 compute-0 ceph-mon[75237]: pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 KiB/s wr, 61 op/s
Nov 29 08:03:06 compute-0 sshd-session[271280]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 08:03:06 compute-0 sshd-session[271280]: Connection reset by 45.140.17.97 port 3878
Nov 29 08:03:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1750808156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1750808156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:07 compute-0 nova_compute[255040]: 2025-11-29 08:03:07.340 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403372.3386958, f350e0f9-cc77-497b-a727-5576d4812c31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:07 compute-0 nova_compute[255040]: 2025-11-29 08:03:07.341 255071 INFO nova.compute.manager [-] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] VM Stopped (Lifecycle Event)
Nov 29 08:03:07 compute-0 nova_compute[255040]: 2025-11-29 08:03:07.361 255071 DEBUG nova.compute.manager [None req-6c468798-336f-4ef5-9e30-9f04d309d6c6 - - - - - -] [instance: f350e0f9-cc77-497b-a727-5576d4812c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:07 compute-0 nova_compute[255040]: 2025-11-29 08:03:07.432 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1750808156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1750808156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 29 08:03:08 compute-0 ceph-mon[75237]: pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 64 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 72 op/s
Nov 29 08:03:09 compute-0 nova_compute[255040]: 2025-11-29 08:03:09.537 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:10 compute-0 ceph-mon[75237]: pgmap v1255: 305 pgs: 305 active+clean; 64 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 72 op/s
Nov 29 08:03:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 91 op/s
Nov 29 08:03:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/883576250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:12 compute-0 nova_compute[255040]: 2025-11-29 08:03:12.437 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:12 compute-0 ceph-mon[75237]: pgmap v1256: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 91 op/s
Nov 29 08:03:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/883576250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 123 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 129 op/s
Nov 29 08:03:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 29 08:03:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 08:03:14 compute-0 nova_compute[255040]: 2025-11-29 08:03:14.540 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:14 compute-0 ceph-mon[75237]: pgmap v1257: 305 pgs: 305 active+clean; 123 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 129 op/s
Nov 29 08:03:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 29 08:03:14 compute-0 podman[271281]: 2025-11-29 08:03:14.94930462 +0000 UTC m=+0.108445540 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:03:14 compute-0 sudo[271305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:14 compute-0 sudo[271305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 sudo[271305]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 sudo[271333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:15 compute-0 sudo[271333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 sudo[271333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 sudo[271358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:15 compute-0 sudo[271358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 sudo[271358]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 sudo[271383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:03:15 compute-0 sudo[271383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 29 08:03:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:15 compute-0 ceph-mon[75237]: osdmap e200: 3 total, 3 up, 3 in
Nov 29 08:03:16 compute-0 podman[271481]: 2025-11-29 08:03:16.605368425 +0000 UTC m=+0.822492249 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:03:16 compute-0 podman[271502]: 2025-11-29 08:03:16.979350607 +0000 UTC m=+0.181538735 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 08:03:17 compute-0 ceph-mon[75237]: pgmap v1259: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 29 08:03:17 compute-0 podman[271481]: 2025-11-29 08:03:17.370224776 +0000 UTC m=+1.587348610 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:03:17 compute-0 nova_compute[255040]: 2025-11-29 08:03:17.441 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 29 08:03:18 compute-0 sudo[271383]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:03:18 compute-0 ceph-mon[75237]: pgmap v1260: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 29 08:03:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:03:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:19 compute-0 sudo[271640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:19 compute-0 sudo[271640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:19 compute-0 sudo[271640]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:19 compute-0 sudo[271665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:19 compute-0 sudo[271665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:19 compute-0 sudo[271665]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:19 compute-0 sudo[271690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:19 compute-0 sudo[271690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:19 compute-0 sudo[271690]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:19 compute-0 sudo[271715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:03:19 compute-0 sudo[271715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 150 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 109 op/s
Nov 29 08:03:19 compute-0 nova_compute[255040]: 2025-11-29 08:03:19.541 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:19 compute-0 sudo[271715]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:03:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:03:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:03:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:03:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 29 08:03:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 0697689d-a86e-4108-8f4e-9e752564319f does not exist
Nov 29 08:03:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 33781f99-8b08-440c-9193-f245cf564d03 does not exist
Nov 29 08:03:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5cba7627-12e8-4066-970b-ca1e7605a724 does not exist
Nov 29 08:03:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:03:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:03:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:03:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:03:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:03:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 08:03:20 compute-0 sudo[271771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:20 compute-0 sudo[271771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:20 compute-0 sudo[271771]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:20 compute-0 sudo[271796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:20 compute-0 sudo[271796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:20 compute-0 sudo[271796]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:20 compute-0 ceph-mon[75237]: pgmap v1261: 305 pgs: 305 active+clean; 150 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 109 op/s
Nov 29 08:03:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:03:20 compute-0 sudo[271821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:20 compute-0 sudo[271821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:20 compute-0 sudo[271821]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:20 compute-0 sudo[271846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:03:20 compute-0 sudo[271846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:20 compute-0 nova_compute[255040]: 2025-11-29 08:03:20.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.195056898 +0000 UTC m=+0.062662260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.334381224 +0000 UTC m=+0.201986556 container create 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 29 08:03:21 compute-0 systemd[1]: Started libpod-conmon-17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee.scope.
Nov 29 08:03:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.43178163 +0000 UTC m=+0.299386942 container init 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.441315067 +0000 UTC m=+0.308920369 container start 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.445990253 +0000 UTC m=+0.313595595 container attach 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:21 compute-0 zealous_lamport[271927]: 167 167
Nov 29 08:03:21 compute-0 systemd[1]: libpod-17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee.scope: Deactivated successfully.
Nov 29 08:03:21 compute-0 podman[271911]: 2025-11-29 08:03:21.448062229 +0000 UTC m=+0.315667511 container died 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 08:03:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 179 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.3 MiB/s wr, 60 op/s
Nov 29 08:03:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:03:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:03:22 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:22 compute-0 ceph-mon[75237]: osdmap e201: 3 total, 3 up, 3 in
Nov 29 08:03:22 compute-0 nova_compute[255040]: 2025-11-29 08:03:22.445 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0814711abedd3a0b401081a3f023f9483f6e9a704e730cf76660ff44de931d5-merged.mount: Deactivated successfully.
Nov 29 08:03:22 compute-0 nova_compute[255040]: 2025-11-29 08:03:22.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 180 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 54 op/s
Nov 29 08:03:23 compute-0 ceph-mon[75237]: pgmap v1263: 305 pgs: 305 active+clean; 179 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.3 MiB/s wr, 60 op/s
Nov 29 08:03:23 compute-0 podman[271911]: 2025-11-29 08:03:23.841965893 +0000 UTC m=+2.709571175 container remove 17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:03:23 compute-0 systemd[1]: libpod-conmon-17a992a55380e49120a4f3dc5308cd248027dbabae0f42ec93298f8647db6eee.scope: Deactivated successfully.
Nov 29 08:03:23 compute-0 nova_compute[255040]: 2025-11-29 08:03:23.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:24 compute-0 podman[271951]: 2025-11-29 08:03:24.046463436 +0000 UTC m=+0.071468098 container create 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:24 compute-0 podman[271951]: 2025-11-29 08:03:24.001108873 +0000 UTC m=+0.026113555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:24 compute-0 systemd[1]: Started libpod-conmon-24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56.scope.
Nov 29 08:03:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:24 compute-0 podman[271951]: 2025-11-29 08:03:24.145874036 +0000 UTC m=+0.170878718 container init 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:03:24 compute-0 podman[271951]: 2025-11-29 08:03:24.156259546 +0000 UTC m=+0.181264208 container start 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:03:24 compute-0 podman[271951]: 2025-11-29 08:03:24.166454621 +0000 UTC m=+0.191459283 container attach 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:24 compute-0 nova_compute[255040]: 2025-11-29 08:03:24.543 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:24 compute-0 ceph-mon[75237]: pgmap v1264: 305 pgs: 305 active+clean; 180 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 54 op/s
Nov 29 08:03:24 compute-0 nova_compute[255040]: 2025-11-29 08:03:24.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:24 compute-0 nova_compute[255040]: 2025-11-29 08:03:24.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:25 compute-0 inspiring_bohr[271967]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:03:25 compute-0 inspiring_bohr[271967]: --> relative data size: 1.0
Nov 29 08:03:25 compute-0 inspiring_bohr[271967]: --> All data devices are unavailable
Nov 29 08:03:25 compute-0 systemd[1]: libpod-24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56.scope: Deactivated successfully.
Nov 29 08:03:25 compute-0 podman[271951]: 2025-11-29 08:03:25.260843824 +0000 UTC m=+1.285848486 container died 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:25 compute-0 systemd[1]: libpod-24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56.scope: Consumed 1.051s CPU time.
Nov 29 08:03:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 08:03:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5695b594edde4171240d3890ee4c51f04d4fac077d9e5857996e56229c6559f-merged.mount: Deactivated successfully.
Nov 29 08:03:25 compute-0 podman[271951]: 2025-11-29 08:03:25.672024338 +0000 UTC m=+1.697029000 container remove 24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:03:25 compute-0 sudo[271846]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-0 systemd[1]: libpod-conmon-24b05c9ae63baf3b58a6a625b566ef4e5ecfe3585ba37a1017aafdd52b9a1b56.scope: Deactivated successfully.
Nov 29 08:03:25 compute-0 podman[271997]: 2025-11-29 08:03:25.792708082 +0000 UTC m=+0.502313783 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:03:25 compute-0 sudo[272018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:25 compute-0 sudo[272018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:25 compute-0 sudo[272018]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-0 sudo[272052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:25 compute-0 sudo[272052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:25 compute-0 sudo[272052]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-0 sudo[272077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:25 compute-0 sudo[272077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:25 compute-0 sudo[272077]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:25 compute-0 nova_compute[255040]: 2025-11-29 08:03:25.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:25 compute-0 nova_compute[255040]: 2025-11-29 08:03:25.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:03:25 compute-0 nova_compute[255040]: 2025-11-29 08:03:25.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:03:25 compute-0 ceph-mon[75237]: pgmap v1265: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 08:03:26 compute-0 sudo[272102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:03:26 compute-0 sudo[272102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.348036042 +0000 UTC m=+0.045525078 container create d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:03:26 compute-0 systemd[1]: Started libpod-conmon-d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8.scope.
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.327313973 +0000 UTC m=+0.024803049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.515795295 +0000 UTC m=+0.213284371 container init d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.529419752 +0000 UTC m=+0.226908808 container start d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:03:26 compute-0 relaxed_murdock[272184]: 167 167
Nov 29 08:03:26 compute-0 systemd[1]: libpod-d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8.scope: Deactivated successfully.
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.53751086 +0000 UTC m=+0.234999916 container attach d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:03:26 compute-0 conmon[272184]: conmon d674497bad039d74dd58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8.scope/container/memory.events
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.538477996 +0000 UTC m=+0.235967062 container died d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c87c06e1757942cbb9425ab3671f5b7fc307dbef610fdb687ee209975a53015-merged.mount: Deactivated successfully.
Nov 29 08:03:26 compute-0 podman[272167]: 2025-11-29 08:03:26.685083328 +0000 UTC m=+0.382572374 container remove d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:03:26 compute-0 systemd[1]: libpod-conmon-d674497bad039d74dd58bd949a1161e73560762c52da92a00c8824b0e63478b8.scope: Deactivated successfully.
Nov 29 08:03:26 compute-0 podman[272210]: 2025-11-29 08:03:26.838125414 +0000 UTC m=+0.024609484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:26 compute-0 podman[272210]: 2025-11-29 08:03:26.968403676 +0000 UTC m=+0.154887716 container create 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:03:27 compute-0 systemd[1]: Started libpod-conmon-7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934.scope.
Nov 29 08:03:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab245748ac04e3017a676b1f1d5d22c0ae908578e8c9fb438734ac652931219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab245748ac04e3017a676b1f1d5d22c0ae908578e8c9fb438734ac652931219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab245748ac04e3017a676b1f1d5d22c0ae908578e8c9fb438734ac652931219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab245748ac04e3017a676b1f1d5d22c0ae908578e8c9fb438734ac652931219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:27 compute-0 podman[272210]: 2025-11-29 08:03:27.114273118 +0000 UTC m=+0.300757158 container init 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:03:27 compute-0 podman[272210]: 2025-11-29 08:03:27.120873346 +0000 UTC m=+0.307357386 container start 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:03:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:03:27.124 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:03:27.125 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:03:27.125 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:27 compute-0 podman[272210]: 2025-11-29 08:03:27.133727672 +0000 UTC m=+0.320211752 container attach 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:03:27 compute-0 nova_compute[255040]: 2025-11-29 08:03:27.496 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 08:03:27 compute-0 nova_compute[255040]: 2025-11-29 08:03:27.770 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:03:27 compute-0 nova_compute[255040]: 2025-11-29 08:03:27.772 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]: {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     "0": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "devices": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "/dev/loop3"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             ],
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_name": "ceph_lv0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_size": "21470642176",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "name": "ceph_lv0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "tags": {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.crush_device_class": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.encrypted": "0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_id": "0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.vdo": "0"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             },
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "vg_name": "ceph_vg0"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         }
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     ],
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     "1": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "devices": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "/dev/loop4"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             ],
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_name": "ceph_lv1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_size": "21470642176",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "name": "ceph_lv1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "tags": {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.crush_device_class": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.encrypted": "0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_id": "1",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.vdo": "0"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             },
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "vg_name": "ceph_vg1"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         }
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     ],
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     "2": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "devices": [
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "/dev/loop5"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             ],
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_name": "ceph_lv2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_size": "21470642176",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "name": "ceph_lv2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "tags": {
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.crush_device_class": "",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.encrypted": "0",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osd_id": "2",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:                 "ceph.vdo": "0"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             },
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "type": "block",
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:             "vg_name": "ceph_vg2"
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:         }
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]:     ]
Nov 29 08:03:27 compute-0 intelligent_shamir[272226]: }
Nov 29 08:03:27 compute-0 systemd[1]: libpod-7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934.scope: Deactivated successfully.
Nov 29 08:03:27 compute-0 podman[272210]: 2025-11-29 08:03:27.959195246 +0000 UTC m=+1.145679286 container died 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dab245748ac04e3017a676b1f1d5d22c0ae908578e8c9fb438734ac652931219-merged.mount: Deactivated successfully.
Nov 29 08:03:28 compute-0 podman[272210]: 2025-11-29 08:03:28.075613864 +0000 UTC m=+1.262097914 container remove 7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:03:28 compute-0 systemd[1]: libpod-conmon-7bbc6d92405297fa6547f45fbf6129e0b67516bea4d1af0a3540b9a8a59a1934.scope: Deactivated successfully.
Nov 29 08:03:28 compute-0 sudo[272102]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:28 compute-0 sudo[272247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:28 compute-0 sudo[272247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:28 compute-0 sudo[272247]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:28 compute-0 sudo[272272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:28 compute-0 sudo[272272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:28 compute-0 sudo[272272]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:28 compute-0 sudo[272297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:28 compute-0 sudo[272297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:28 compute-0 sudo[272297]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:28 compute-0 sudo[272322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:03:28 compute-0 sudo[272322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:28 compute-0 ovn_controller[153295]: 2025-11-29T08:03:28Z|00073|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Nov 29 08:03:28 compute-0 ceph-mon[75237]: pgmap v1266: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.818598094 +0000 UTC m=+0.098835276 container create a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.741469264 +0000 UTC m=+0.021706466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:28 compute-0 systemd[1]: Started libpod-conmon-a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7.scope.
Nov 29 08:03:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.964288471 +0000 UTC m=+0.244525643 container init a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.976442838 +0000 UTC m=+0.256679990 container start a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.980657822 +0000 UTC m=+0.260895084 container attach a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:03:28 compute-0 upbeat_lumiere[272403]: 167 167
Nov 29 08:03:28 compute-0 systemd[1]: libpod-a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7.scope: Deactivated successfully.
Nov 29 08:03:28 compute-0 podman[272386]: 2025-11-29 08:03:28.98430689 +0000 UTC m=+0.264544062 container died a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.014 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.016 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.016 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.016 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.017 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2959057878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2959057878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c9d13edfd3d333673fd4b275918641c61fc10c3045035aad53f225032bc633-merged.mount: Deactivated successfully.
Nov 29 08:03:29 compute-0 podman[272386]: 2025-11-29 08:03:29.1408142 +0000 UTC m=+0.421051352 container remove a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:29 compute-0 systemd[1]: libpod-conmon-a21e96b862f7b190a555b687a9e201a297fa22bf0e9d861698413f443d833fa7.scope: Deactivated successfully.
Nov 29 08:03:29 compute-0 podman[272448]: 2025-11-29 08:03:29.3885997 +0000 UTC m=+0.092316670 container create b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 08:03:29 compute-0 podman[272448]: 2025-11-29 08:03:29.324142062 +0000 UTC m=+0.027859012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261119118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:29 compute-0 systemd[1]: Started libpod-conmon-b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b.scope.
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.495 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f543c472f0afb39ff6f6831ea1f98a87e34d610ab3f4c8d97f62ce2cb97d6fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f543c472f0afb39ff6f6831ea1f98a87e34d610ab3f4c8d97f62ce2cb97d6fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f543c472f0afb39ff6f6831ea1f98a87e34d610ab3f4c8d97f62ce2cb97d6fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f543c472f0afb39ff6f6831ea1f98a87e34d610ab3f4c8d97f62ce2cb97d6fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 712 KiB/s rd, 1.3 MiB/s wr, 40 op/s
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.545 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:29 compute-0 podman[272448]: 2025-11-29 08:03:29.691927126 +0000 UTC m=+0.395644076 container init b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:03:29 compute-0 podman[272448]: 2025-11-29 08:03:29.70173339 +0000 UTC m=+0.405450320 container start b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.712 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.714 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4642MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.714 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.714 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:29 compute-0 podman[272448]: 2025-11-29 08:03:29.724568967 +0000 UTC m=+0.428285917 container attach b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2959057878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2959057878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3261119118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.975 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.976 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:03:29 compute-0 nova_compute[255040]: 2025-11-29 08:03:29.995 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.011 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.011 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.025 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.059 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.076 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1361442252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1361442252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464638932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.519 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.526 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.545 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:30 compute-0 elegant_ride[272466]: {
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_id": 2,
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "type": "bluestore"
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     },
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_id": 0,
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "type": "bluestore"
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     },
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_id": 1,
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:03:30 compute-0 elegant_ride[272466]:         "type": "bluestore"
Nov 29 08:03:30 compute-0 elegant_ride[272466]:     }
Nov 29 08:03:30 compute-0 elegant_ride[272466]: }
Nov 29 08:03:30 compute-0 systemd[1]: libpod-b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b.scope: Deactivated successfully.
Nov 29 08:03:30 compute-0 systemd[1]: libpod-b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b.scope: Consumed 1.037s CPU time.
Nov 29 08:03:30 compute-0 podman[272448]: 2025-11-29 08:03:30.744785629 +0000 UTC m=+1.448502559 container died b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f543c472f0afb39ff6f6831ea1f98a87e34d610ab3f4c8d97f62ce2cb97d6fe-merged.mount: Deactivated successfully.
Nov 29 08:03:30 compute-0 podman[272448]: 2025-11-29 08:03:30.84533599 +0000 UTC m=+1.549052920 container remove b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_ride, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:03:30 compute-0 systemd[1]: libpod-conmon-b078004c9e9b085e9c81c861632b0ac5c4bb64b6641374f9de17f1c269fd782b.scope: Deactivated successfully.
Nov 29 08:03:30 compute-0 sudo[272322]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:03:30 compute-0 ceph-mon[75237]: pgmap v1267: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 712 KiB/s rd, 1.3 MiB/s wr, 40 op/s
Nov 29 08:03:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1361442252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1361442252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1464638932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:30 compute-0 podman[272523]: 2025-11-29 08:03:30.897853936 +0000 UTC m=+0.120065438 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.907 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:03:30 compute-0 nova_compute[255040]: 2025-11-29 08:03:30.908 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:30 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 08:03:30 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:30.912962) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:03:30 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 08:03:30 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403410913157, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2268, "num_deletes": 263, "total_data_size": 3423320, "memory_usage": 3474576, "flush_reason": "Manual Compaction"}
Nov 29 08:03:30 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 08:03:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:03:31 compute-0 nova_compute[255040]: 2025-11-29 08:03:31.112 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:31 compute-0 nova_compute[255040]: 2025-11-29 08:03:31.130 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:31 compute-0 nova_compute[255040]: 2025-11-29 08:03:31.131 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:31 compute-0 nova_compute[255040]: 2025-11-29 08:03:31.131 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:03:31 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:31 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev abd90215-5bca-40f6-b027-1c701f9d89f2 does not exist
Nov 29 08:03:31 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev cd71ae87-cb14-42c1-96c1-864613883ee1 does not exist
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403411516595, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3348938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21652, "largest_seqno": 23919, "table_properties": {"data_size": 3338596, "index_size": 6580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22145, "raw_average_key_size": 20, "raw_value_size": 3317586, "raw_average_value_size": 3144, "num_data_blocks": 292, "num_entries": 1055, "num_filter_entries": 1055, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403221, "oldest_key_time": 1764403221, "file_creation_time": 1764403410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 603772 microseconds, and 13721 cpu microseconds.
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:03:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.3 KiB/s wr, 30 op/s
Nov 29 08:03:31 compute-0 sudo[272553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:31 compute-0 sudo[272553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:31 compute-0 sudo[272553]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.516753) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3348938 bytes OK
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.516803) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.600791) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.600845) EVENT_LOG_v1 {"time_micros": 1764403411600832, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.600873) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3413541, prev total WAL file size 3454605, number of live WAL files 2.
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.604154) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3270KB)], [50(8155KB)]
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403411604249, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11699712, "oldest_snapshot_seqno": -1}
Nov 29 08:03:31 compute-0 sudo[272578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:03:31 compute-0 sudo[272578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:31 compute-0 sudo[272578]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5288 keys, 9963918 bytes, temperature: kUnknown
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403411969499, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9963918, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9924670, "index_size": 24930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 130901, "raw_average_key_size": 24, "raw_value_size": 9825500, "raw_average_value_size": 1858, "num_data_blocks": 1034, "num_entries": 5288, "num_filter_entries": 5288, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:03:31 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:03:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1477701246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.969833) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9963918 bytes
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.335386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.0 rd, 27.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 5820, records dropped: 532 output_compression: NoCompression
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.335431) EVENT_LOG_v1 {"time_micros": 1764403412335410, "job": 26, "event": "compaction_finished", "compaction_time_micros": 365346, "compaction_time_cpu_micros": 25468, "output_level": 6, "num_output_files": 1, "total_output_size": 9963918, "num_input_records": 5820, "num_output_records": 5288, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403412337151, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403412339751, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:31.603988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.339868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.339877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.339881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.339885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:03:32.339889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:03:32 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:32 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:03:32 compute-0 ceph-mon[75237]: pgmap v1268: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.3 KiB/s wr, 30 op/s
Nov 29 08:03:32 compute-0 nova_compute[255040]: 2025-11-29 08:03:32.499 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.6 KiB/s wr, 36 op/s
Nov 29 08:03:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1477701246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:34 compute-0 nova_compute[255040]: 2025-11-29 08:03:34.546 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:34 compute-0 ceph-mon[75237]: pgmap v1269: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.6 KiB/s wr, 36 op/s
Nov 29 08:03:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 29 08:03:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 08:03:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 29 08:03:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625493045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625493045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 191 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 662 KiB/s wr, 40 op/s
Nov 29 08:03:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:35 compute-0 ceph-mon[75237]: osdmap e202: 3 total, 3 up, 3 in
Nov 29 08:03:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2625493045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2625493045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:35 compute-0 ceph-mon[75237]: pgmap v1271: 305 pgs: 305 active+clean; 191 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 662 KiB/s wr, 40 op/s
Nov 29 08:03:36 compute-0 sshd-session[272231]: Connection closed by 45.78.219.195 port 49024 [preauth]
Nov 29 08:03:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 29 08:03:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 08:03:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 29 08:03:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1852577940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1852577940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 nova_compute[255040]: 2025-11-29 08:03:37.502 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 191 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 827 KiB/s wr, 45 op/s
Nov 29 08:03:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188508328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188508328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: osdmap e203: 3 total, 3 up, 3 in
Nov 29 08:03:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1852577940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1852577940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: pgmap v1273: 305 pgs: 305 active+clean; 191 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 827 KiB/s wr, 45 op/s
Nov 29 08:03:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1188508328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1188508328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2954263402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2954263402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621501690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621501690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:03:38
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'backups', 'vms', 'default.rgw.meta']
Nov 29 08:03:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 29 08:03:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 08:03:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 29 08:03:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2954263402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2954263402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3621501690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3621501690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 173 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Nov 29 08:03:39 compute-0 nova_compute[255040]: 2025-11-29 08:03:39.549 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:40 compute-0 ceph-mon[75237]: osdmap e204: 3 total, 3 up, 3 in
Nov 29 08:03:40 compute-0 ceph-mon[75237]: pgmap v1275: 305 pgs: 305 active+clean; 173 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Nov 29 08:03:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 29 08:03:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 08:03:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 29 08:03:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 135 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 2.5 MiB/s wr, 257 op/s
Nov 29 08:03:42 compute-0 ceph-mon[75237]: osdmap e205: 3 total, 3 up, 3 in
Nov 29 08:03:42 compute-0 ceph-mon[75237]: pgmap v1277: 305 pgs: 305 active+clean; 135 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 2.5 MiB/s wr, 257 op/s
Nov 29 08:03:42 compute-0 nova_compute[255040]: 2025-11-29 08:03:42.506 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2665716646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2665716646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:03:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:03:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2665716646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2665716646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 123 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 2.3 MiB/s wr, 248 op/s
Nov 29 08:03:44 compute-0 ceph-mon[75237]: pgmap v1278: 305 pgs: 305 active+clean; 123 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 2.3 MiB/s wr, 248 op/s
Nov 29 08:03:44 compute-0 nova_compute[255040]: 2025-11-29 08:03:44.551 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 234 op/s
Nov 29 08:03:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 29 08:03:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701007539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 08:03:45 compute-0 podman[272605]: 2025-11-29 08:03:45.948985658 +0000 UTC m=+0.117726534 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:03:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 29 08:03:46 compute-0 ceph-mon[75237]: pgmap v1279: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 234 op/s
Nov 29 08:03:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2701007539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 29 08:03:47 compute-0 ceph-mon[75237]: osdmap e206: 3 total, 3 up, 3 in
Nov 29 08:03:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 08:03:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 29 08:03:47 compute-0 nova_compute[255040]: 2025-11-29 08:03:47.508 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 KiB/s wr, 55 op/s
Nov 29 08:03:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 29 08:03:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 08:03:48 compute-0 ceph-mon[75237]: osdmap e207: 3 total, 3 up, 3 in
Nov 29 08:03:48 compute-0 ceph-mon[75237]: pgmap v1282: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 KiB/s wr, 55 op/s
Nov 29 08:03:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 29 08:03:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 81 op/s
Nov 29 08:03:49 compute-0 nova_compute[255040]: 2025-11-29 08:03:49.553 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:49 compute-0 ceph-mon[75237]: osdmap e208: 3 total, 3 up, 3 in
Nov 29 08:03:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 29 08:03:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 08:03:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 29 08:03:50 compute-0 ceph-mon[75237]: pgmap v1284: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 81 op/s
Nov 29 08:03:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700038229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/700038229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 80 op/s
Nov 29 08:03:51 compute-0 ceph-mon[75237]: osdmap e209: 3 total, 3 up, 3 in
Nov 29 08:03:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/700038229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/700038229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:52 compute-0 nova_compute[255040]: 2025-11-29 08:03:52.512 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:52 compute-0 ceph-mon[75237]: pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 80 op/s
Nov 29 08:03:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3142163053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 111 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 81 op/s
Nov 29 08:03:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3142163053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:54 compute-0 nova_compute[255040]: 2025-11-29 08:03:54.554 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 29 08:03:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 130 op/s
Nov 29 08:03:55 compute-0 podman[272631]: 2025-11-29 08:03:55.907800779 +0000 UTC m=+0.074121189 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006927571709969395 of space, bias 1.0, pg target 0.20782715129908186 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:03:57 compute-0 nova_compute[255040]: 2025-11-29 08:03:57.514 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 979 KiB/s rd, 2.4 MiB/s wr, 106 op/s
Nov 29 08:03:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63415931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63415931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 08:03:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 29 08:03:58 compute-0 ceph-mon[75237]: pgmap v1287: 305 pgs: 305 active+clean; 111 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 81 op/s
Nov 29 08:03:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568191700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568191700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.4 MiB/s wr, 90 op/s
Nov 29 08:03:59 compute-0 nova_compute[255040]: 2025-11-29 08:03:59.556 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 ceph-mon[75237]: pgmap v1288: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 130 op/s
Nov 29 08:03:59 compute-0 ceph-mon[75237]: pgmap v1289: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 979 KiB/s rd, 2.4 MiB/s wr, 106 op/s
Nov 29 08:03:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/63415931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/63415931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:59 compute-0 ceph-mon[75237]: osdmap e210: 3 total, 3 up, 3 in
Nov 29 08:03:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/568191700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/568191700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:00 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:00.115 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:00 compute-0 nova_compute[255040]: 2025-11-29 08:04:00.116 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:00 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:00.117 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:04:00 compute-0 ceph-mon[75237]: pgmap v1291: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.4 MiB/s wr, 90 op/s
Nov 29 08:04:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Nov 29 08:04:01 compute-0 podman[272652]: 2025-11-29 08:04:01.900503971 +0000 UTC m=+0.068642212 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd)
Nov 29 08:04:02 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:02.121 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:02 compute-0 nova_compute[255040]: 2025-11-29 08:04:02.517 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:02 compute-0 ceph-mon[75237]: pgmap v1292: 305 pgs: 305 active+clean; 134 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Nov 29 08:04:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 111 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 963 KiB/s wr, 81 op/s
Nov 29 08:04:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:04 compute-0 nova_compute[255040]: 2025-11-29 08:04:04.558 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-0 ceph-mon[75237]: pgmap v1293: 305 pgs: 305 active+clean; 111 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 963 KiB/s wr, 81 op/s
Nov 29 08:04:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756452573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756452573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Nov 29 08:04:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2756452573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2756452573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 29 08:04:07 compute-0 ceph-mon[75237]: pgmap v1294: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Nov 29 08:04:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 08:04:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 29 08:04:07 compute-0 nova_compute[255040]: 2025-11-29 08:04:07.520 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 34 op/s
Nov 29 08:04:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 29 08:04:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 08:04:08 compute-0 ceph-mon[75237]: osdmap e211: 3 total, 3 up, 3 in
Nov 29 08:04:08 compute-0 ceph-mon[75237]: pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 34 op/s
Nov 29 08:04:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 29 08:04:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611314620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611314620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mon[75237]: osdmap e212: 3 total, 3 up, 3 in
Nov 29 08:04:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/611314620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/611314620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 KiB/s wr, 71 op/s
Nov 29 08:04:09 compute-0 nova_compute[255040]: 2025-11-29 08:04:09.561 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 29 08:04:10 compute-0 ceph-mon[75237]: pgmap v1298: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 KiB/s wr, 71 op/s
Nov 29 08:04:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 08:04:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1620171169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 29 08:04:11 compute-0 ceph-mon[75237]: osdmap e213: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1620171169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 KiB/s wr, 75 op/s
Nov 29 08:04:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 29 08:04:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 08:04:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 29 08:04:12 compute-0 ceph-mon[75237]: osdmap e214: 3 total, 3 up, 3 in
Nov 29 08:04:12 compute-0 ceph-mon[75237]: pgmap v1301: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 KiB/s wr, 75 op/s
Nov 29 08:04:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1033399464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1033399464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:12 compute-0 nova_compute[255040]: 2025-11-29 08:04:12.524 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944952965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944952965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 29 08:04:13 compute-0 ceph-mon[75237]: osdmap e215: 3 total, 3 up, 3 in
Nov 29 08:04:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1033399464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1033399464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2944952965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2944952965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 08:04:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 29 08:04:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.7 KiB/s wr, 42 op/s
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 29 08:04:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 08:04:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 29 08:04:14 compute-0 ceph-mon[75237]: osdmap e216: 3 total, 3 up, 3 in
Nov 29 08:04:14 compute-0 ceph-mon[75237]: pgmap v1304: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.7 KiB/s wr, 42 op/s
Nov 29 08:04:14 compute-0 ceph-mon[75237]: osdmap e217: 3 total, 3 up, 3 in
Nov 29 08:04:14 compute-0 nova_compute[255040]: 2025-11-29 08:04:14.562 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 29 08:04:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 08:04:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 29 08:04:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 11 KiB/s wr, 252 op/s
Nov 29 08:04:16 compute-0 ceph-mon[75237]: osdmap e218: 3 total, 3 up, 3 in
Nov 29 08:04:16 compute-0 ceph-mon[75237]: pgmap v1307: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 11 KiB/s wr, 252 op/s
Nov 29 08:04:16 compute-0 podman[272672]: 2025-11-29 08:04:16.941770836 +0000 UTC m=+0.111138298 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.0 KiB/s wr, 198 op/s
Nov 29 08:04:17 compute-0 nova_compute[255040]: 2025-11-29 08:04:17.577 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:17 compute-0 ceph-mon[75237]: pgmap v1308: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.0 KiB/s wr, 198 op/s
Nov 29 08:04:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 29 08:04:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 08:04:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 29 08:04:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/139274945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/139274945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:19 compute-0 nova_compute[255040]: 2025-11-29 08:04:19.564 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 7.3 KiB/s wr, 175 op/s
Nov 29 08:04:19 compute-0 ceph-mon[75237]: osdmap e219: 3 total, 3 up, 3 in
Nov 29 08:04:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/139274945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/139274945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:20 compute-0 ceph-mon[75237]: pgmap v1310: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 7.3 KiB/s wr, 175 op/s
Nov 29 08:04:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 5.8 KiB/s wr, 155 op/s
Nov 29 08:04:21 compute-0 nova_compute[255040]: 2025-11-29 08:04:21.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:22 compute-0 nova_compute[255040]: 2025-11-29 08:04:22.580 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:22 compute-0 ceph-mon[75237]: pgmap v1311: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 5.8 KiB/s wr, 155 op/s
Nov 29 08:04:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945514551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945514551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 29 08:04:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.6 KiB/s wr, 51 op/s
Nov 29 08:04:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 08:04:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 29 08:04:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3945514551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3945514551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:23 compute-0 ceph-mon[75237]: osdmap e220: 3 total, 3 up, 3 in
Nov 29 08:04:23 compute-0 nova_compute[255040]: 2025-11-29 08:04:23.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3140154202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3140154202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.301 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.301 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.555 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.566 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:24 compute-0 ceph-mon[75237]: pgmap v1312: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.6 KiB/s wr, 51 op/s
Nov 29 08:04:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3140154202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3140154202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.736 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.737 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.753 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.754 255071 INFO nova.compute.claims [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:04:24 compute-0 nova_compute[255040]: 2025-11-29 08:04:24.971 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.001 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.003 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113552065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.463 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.472 255071 DEBUG nova.compute.provider_tree [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.491 255071 DEBUG nova.scheduler.client.report [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.525 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.527 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:04:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.1 KiB/s wr, 91 op/s
Nov 29 08:04:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/113552065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.731 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.732 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:04:25 compute-0 nova_compute[255040]: 2025-11-29 08:04:25.761 255071 INFO nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.113 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.113 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.113 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.116 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.126 255071 DEBUG nova.policy [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2f0bad5019c043259e8f0cdbb532a167', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.144 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.145 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.388 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.391 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.392 255071 INFO nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Creating image(s)
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.420 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.450 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.488 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.493 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.565 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.566 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.567 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.567 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.595 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:26 compute-0 nova_compute[255040]: 2025-11-29 08:04:26.600 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 cd169ba7-ec52-418c-a12c-4069b40674d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:26 compute-0 ceph-mon[75237]: pgmap v1314: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.1 KiB/s wr, 91 op/s
Nov 29 08:04:26 compute-0 podman[272815]: 2025-11-29 08:04:26.902251282 +0000 UTC m=+0.059622899 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.002 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.013 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 cd169ba7-ec52-418c-a12c-4069b40674d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.071 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] resizing rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:04:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:27.124 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:27.125 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:27.125 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.159 255071 DEBUG nova.objects.instance [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'migration_context' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.362 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.363 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Ensure instance console log exists: /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.364 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.365 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.365 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.3 KiB/s wr, 78 op/s
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.584 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:27 compute-0 nova_compute[255040]: 2025-11-29 08:04:27.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.005 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.006 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.009 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:28 compute-0 ceph-mon[75237]: pgmap v1315: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.3 KiB/s wr, 78 op/s
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.042 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Successfully created port: bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:04:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563113228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.506 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 29 08:04:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 08:04:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.726 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Successfully updated port: bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.745 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.747 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4666MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.747 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.748 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.767 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.767 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquired lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.767 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.812 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance cd169ba7-ec52-418c-a12c-4069b40674d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.813 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.813 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.863 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.894 255071 DEBUG nova.compute.manager [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-changed-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.894 255071 DEBUG nova.compute.manager [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Refreshing instance network info cache due to event network-changed-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.895 255071 DEBUG oslo_concurrency.lockutils [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:28 compute-0 nova_compute[255040]: 2025-11-29 08:04:28.923 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:04:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277474973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.312 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.321 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/563113228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:29 compute-0 ceph-mon[75237]: osdmap e221: 3 total, 3 up, 3 in
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.346 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.378 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.379 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.567 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.576 255071 DEBUG nova.network.neutron [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating instance_info_cache with network_info: [{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 110 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 890 KiB/s wr, 84 op/s
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.591 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Releasing lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.591 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Instance network_info: |[{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.592 255071 DEBUG oslo_concurrency.lockutils [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.592 255071 DEBUG nova.network.neutron [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Refreshing network info cache for port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.594 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Start _get_guest_xml network_info=[{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.599 255071 WARNING nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.603 255071 DEBUG nova.virt.libvirt.host [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.604 255071 DEBUG nova.virt.libvirt.host [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.611 255071 DEBUG nova.virt.libvirt.host [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.612 255071 DEBUG nova.virt.libvirt.host [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.613 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.613 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.614 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.614 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.614 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.615 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.615 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.616 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.616 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.616 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.617 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.617 255071 DEBUG nova.virt.hardware [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:04:29 compute-0 nova_compute[255040]: 2025-11-29 08:04:29.621 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440560583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.081 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.106 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.110 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.380 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.381 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.381 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:04:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4277474973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:30 compute-0 ceph-mon[75237]: pgmap v1317: 305 pgs: 305 active+clean; 110 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 890 KiB/s wr, 84 op/s
Nov 29 08:04:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2440560583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.473 255071 DEBUG nova.network.neutron [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updated VIF entry in instance network info cache for port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.474 255071 DEBUG nova.network.neutron [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating instance_info_cache with network_info: [{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.493 255071 DEBUG oslo_concurrency.lockutils [req-a1793b64-68fd-4f02-862a-d88bbf597b46 req-00ae5298-f8d2-4092-82f9-6b3019752bc6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521982357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.578 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.580 255071 DEBUG nova.virt.libvirt.vif [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-433627761',display_name='tempest-VolumesBackupsTest-instance-433627761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-433627761',id=6,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuz0qIQ6xhoHq5TxQpExhWUsmNZyNDNS9yXH8I5twTL5A4pRxaeiLeVkHbUZyDz8LpYRH2KFWp5exvZLsFp2vL75/EmERN+ObohGkR86ilphfmiaekgcxTAymp8CPWjDw==',key_name='tempest-keypair-359361841',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-wsd520bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=cd169ba7-ec52-418c-a12c-4069b40674d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.580 255071 DEBUG nova.network.os_vif_util [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.581 255071 DEBUG nova.network.os_vif_util [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.584 255071 DEBUG nova.objects.instance [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.601 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <uuid>cd169ba7-ec52-418c-a12c-4069b40674d7</uuid>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <name>instance-00000006</name>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesBackupsTest-instance-433627761</nova:name>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:04:29</nova:creationTime>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:user uuid="2f0bad5019c043259e8f0cdbb532a167">tempest-VolumesBackupsTest-433060525-project-member</nova:user>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:project uuid="122d6c1348a9421688c8c95fa7bfdf33">tempest-VolumesBackupsTest-433060525</nova:project>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <nova:port uuid="bf5f0bd4-6972-4cd3-9d99-aace0e25efc8">
Nov 29 08:04:30 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <system>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="serial">cd169ba7-ec52-418c-a12c-4069b40674d7</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="uuid">cd169ba7-ec52-418c-a12c-4069b40674d7</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </system>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <os>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </os>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <features>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </features>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/cd169ba7-ec52-418c-a12c-4069b40674d7_disk">
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </source>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config">
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </source>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:04:30 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:17:a4:81"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <target dev="tapbf5f0bd4-69"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/console.log" append="off"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <video>
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </video>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:04:30 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:04:30 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:04:30 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:04:30 compute-0 nova_compute[255040]: </domain>
Nov 29 08:04:30 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.604 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Preparing to wait for external event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.606 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.606 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.606 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.607 255071 DEBUG nova.virt.libvirt.vif [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-433627761',display_name='tempest-VolumesBackupsTest-instance-433627761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-433627761',id=6,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuz0qIQ6xhoHq5TxQpExhWUsmNZyNDNS9yXH8I5twTL5A4pRxaeiLeVkHbUZyDz8LpYRH2KFWp5exvZLsFp2vL75/EmERN+ObohGkR86ilphfmiaekgcxTAymp8CPWjDw==',key_name='tempest-keypair-359361841',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-wsd520bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=cd169ba7-ec52-418c-a12c-4069b40674d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.608 255071 DEBUG nova.network.os_vif_util [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.609 255071 DEBUG nova.network.os_vif_util [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.609 255071 DEBUG os_vif [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.610 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.611 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.611 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.617 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.617 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5f0bd4-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.618 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf5f0bd4-69, col_values=(('external_ids', {'iface-id': 'bf5f0bd4-6972-4cd3-9d99-aace0e25efc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:a4:81', 'vm-uuid': 'cd169ba7-ec52-418c-a12c-4069b40674d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.620 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:30 compute-0 NetworkManager[49116]: <info>  [1764403470.6217] manager: (tapbf5f0bd4-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.630 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.631 255071 INFO os_vif [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69')
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.679 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.679 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.679 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No VIF found with MAC fa:16:3e:17:a4:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.680 255071 INFO nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Using config drive
Nov 29 08:04:30 compute-0 nova_compute[255040]: 2025-11-29 08:04:30.708 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2286624953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2286624953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.027 255071 INFO nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Creating config drive at /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.036 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppm_tk3jg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.170 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppm_tk3jg" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.199 255071 DEBUG nova.storage.rbd_utils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] rbd image cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.205 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.368 255071 DEBUG oslo_concurrency.processutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config cd169ba7-ec52-418c-a12c-4069b40674d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.369 255071 INFO nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Deleting local config drive /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7/disk.config because it was imported into RBD.
Nov 29 08:04:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/521982357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2286624953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2286624953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:31 compute-0 kernel: tapbf5f0bd4-69: entered promiscuous mode
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.4474] manager: (tapbf5f0bd4-69): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 08:04:31 compute-0 ovn_controller[153295]: 2025-11-29T08:04:31Z|00074|binding|INFO|Claiming lport bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 for this chassis.
Nov 29 08:04:31 compute-0 ovn_controller[153295]: 2025-11-29T08:04:31Z|00075|binding|INFO|bf5f0bd4-6972-4cd3-9d99-aace0e25efc8: Claiming fa:16:3e:17:a4:81 10.100.0.8
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.450 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.456 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.458 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.470 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:a4:81 10.100.0.8'], port_security=['fa:16:3e:17:a4:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cd169ba7-ec52-418c-a12c-4069b40674d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4fa5ad84-2e5d-4915-82bd-1bb6b8ec61df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932d84e2-f2b7-4447-ace7-dc91550d516b, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.472 163500 INFO neutron.agent.ovn.metadata.agent [-] Port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 in datapath 0b79d41d-8eb2-4d4a-9786-7791592a7e66 bound to our chassis
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.473 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:04:31 compute-0 systemd-udevd[273084]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:31 compute-0 systemd-machined[216271]: New machine qemu-6-instance-00000006.
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.499 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba96dd8-fcdf-4e70-b455-88a79553431b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.501 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0b79d41d-81 in ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:31 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.505 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0b79d41d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.506 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[364d4465-bf3e-417f-9e4a-c43414161a59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.508 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e4de76e5-9c98-4075-bbff-d34e34312209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.5125] device (tapbf5f0bd4-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.5140] device (tapbf5f0bd4-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.530 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.530 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[e24bd5bc-f794-4f5b-9774-4c3d75936edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_controller[153295]: 2025-11-29T08:04:31Z|00076|binding|INFO|Setting lport bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 ovn-installed in OVS
Nov 29 08:04:31 compute-0 ovn_controller[153295]: 2025-11-29T08:04:31Z|00077|binding|INFO|Setting lport bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 up in Southbound
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.537 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.548 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f901ff1b-1123-4677-a842-64bc3eee9324]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 134 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.595 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b12941-a504-44d3-a147-8ccfd44e7fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.603 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f7af64bc-e50a-4175-85e9-737cbf8dcabe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.6048] manager: (tap0b79d41d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.636 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a65c3688-869d-47ac-9141-b29f45e318fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.641 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbe5d26-9e15-426b-9bb9-674e6638fc7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.6827] device (tap0b79d41d-80): carrier: link connected
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.687 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d51da30f-b7bf-4268-bd0c-9bd684f9c05c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 sudo[273114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:31 compute-0 sudo[273114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:31 compute-0 sudo[273114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.707 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2443d525-69b5-49de-87f7-d027fb42f648]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b79d41d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ac:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570902, 'reachable_time': 41421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273139, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.726 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[51f1c04b-08ae-4ceb-bef6-9e6b29209873]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:ac51'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570902, 'tstamp': 570902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273142, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.743 255071 DEBUG nova.compute.manager [req-c8d13ec1-27e0-40d7-b238-983c1c6430ff req-15a59a03-d883-4204-976f-bafedd3134fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.743 255071 DEBUG oslo_concurrency.lockutils [req-c8d13ec1-27e0-40d7-b238-983c1c6430ff req-15a59a03-d883-4204-976f-bafedd3134fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.744 255071 DEBUG oslo_concurrency.lockutils [req-c8d13ec1-27e0-40d7-b238-983c1c6430ff req-15a59a03-d883-4204-976f-bafedd3134fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.744 255071 DEBUG oslo_concurrency.lockutils [req-c8d13ec1-27e0-40d7-b238-983c1c6430ff req-15a59a03-d883-4204-976f-bafedd3134fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.744 255071 DEBUG nova.compute.manager [req-c8d13ec1-27e0-40d7-b238-983c1c6430ff req-15a59a03-d883-4204-976f-bafedd3134fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Processing event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.746 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c11503-111c-4443-8618-52cfa1787f55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0b79d41d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:ac:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570902, 'reachable_time': 41421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273155, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 sudo[273143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.781 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7f8d2d-a77f-49d5-8b2a-66326ec34212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 sudo[273143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:31 compute-0 sudo[273143]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:31 compute-0 sudo[273172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:31 compute-0 sudo[273172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:31 compute-0 sudo[273172]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.844 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc7310d-5c1b-48c7-aa23-0775b58b3b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.846 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b79d41d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.846 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.847 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b79d41d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.849 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 NetworkManager[49116]: <info>  [1764403471.8497] manager: (tap0b79d41d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 08:04:31 compute-0 kernel: tap0b79d41d-80: entered promiscuous mode
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.852 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0b79d41d-80, col_values=(('external_ids', {'iface-id': '9fb3b3e1-f71e-47ab-acbc-6f0864db08ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.853 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.853 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.854 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.855 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d7243e-8dca-488d-b3cf-84dabae26927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.856 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/0b79d41d-8eb2-4d4a-9786-7791592a7e66.pid.haproxy
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 0b79d41d-8eb2-4d4a-9786-7791592a7e66
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
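[editor's note] The block above is neutron's metadata driver logging the haproxy configuration it rendered before writing it under /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf. A rough sketch of that render-and-write step; _TEMPLATE and create_config are illustrative stand-ins (the real template lives in neutron.agent.ovn.metadata.driver) and the defaults section is omitted for brevity:

    # Sketch: render a per-network haproxy config like the one logged above.
    _TEMPLATE = """global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{network_id}
        user        root
        group       root
        maxconn     1024
        pidfile     {pid_path}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata {socket_path}
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def create_config(network_id, conf_dir='/var/lib/neutron/ovn-metadata-proxy'):
        cfg = _TEMPLATE.format(
            network_id=network_id,
            pid_path=f'/var/lib/neutron/external/pids/{network_id}.pid.haproxy',
            socket_path='/var/lib/neutron/metadata_proxy')
        path = f'{conf_dir}/{network_id}.conf'
        with open(path, 'w') as f:
            f.write(cfg)
        return path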
Nov 29 08:04:31 compute-0 ovn_controller[153295]: 2025-11-29T08:04:31Z|00078|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:04:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:04:31.857 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'env', 'PROCESS_TAG=haproxy-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0b79d41d-8eb2-4d4a-9786-7791592a7e66.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
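[editor's note] The command logged above launches haproxy inside the ovnmeta- namespace through neutron-rootwrap. Stripped of the rootwrap wrapper, it reduces to an ip netns exec invocation; a sketch assuming root privileges and an existing namespace (spawn_metadata_haproxy is an illustrative name):

    # Sketch: what the rootwrap-wrapped command above boils down to.
    import subprocess

    def spawn_metadata_haproxy(network_id):
        ns = f'ovnmeta-{network_id}'
        conf = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'
        subprocess.run(
            ['ip', 'netns', 'exec', ns,
             'env', f'PROCESS_TAG=haproxy-{network_id}',
             'haproxy', '-f', conf],
            check=True)  # haproxy forks itself ('daemon' in the config)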
Nov 29 08:04:31 compute-0 nova_compute[255040]: 2025-11-29 08:04:31.871 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:31 compute-0 sudo[273200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:04:31 compute-0 sudo[273200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:32 compute-0 podman[273262]: 2025-11-29 08:04:32.268647948 +0000 UTC m=+0.045214090 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:32 compute-0 podman[273262]: 2025-11-29 08:04:32.35367336 +0000 UTC m=+0.130239472 container create 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:04:32 compute-0 sudo[273200]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:32 compute-0 systemd[1]: Started libpod-conmon-4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462.scope.
Nov 29 08:04:32 compute-0 ceph-mon[75237]: pgmap v1318: 305 pgs: 305 active+clean; 134 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Nov 29 08:04:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957b8c957884b48f0c8e2cf8dcee56e5843e1ce8acd92868b3dbd99e9ecfeb1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:04:32 compute-0 podman[273262]: 2025-11-29 08:04:32.452376312 +0000 UTC m=+0.228942444 container init 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:32 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 143dc3f3-1dbb-441a-be18-15196fca44d0 does not exist
Nov 29 08:04:32 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d734e95d-0fbb-42bb-9c10-bef473e47cdc does not exist
Nov 29 08:04:32 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1034d451-4a05-47e8-a0b6-ded059f2631d does not exist
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:04:32 compute-0 podman[273262]: 2025-11-29 08:04:32.461030565 +0000 UTC m=+0.237596677 container start 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:04:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:04:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:32 compute-0 podman[273291]: 2025-11-29 08:04:32.476654446 +0000 UTC m=+0.082966088 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
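[editor's note] The multipathd line above is a periodic podman health check passing (health_status=healthy, test command /openstack/healthcheck from the container's config_data). The same probe can be triggered on demand; a sketch:

    # Sketch: run the container's configured health check by hand.
    import subprocess

    res = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'],
                         capture_output=True, text=True)
    # exit code 0 == healthy; non-zero means the check command failed
    print('healthy' if res.returncode == 0 else 'unhealthy')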
Nov 29 08:04:32 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [NOTICE]   (273350) : New worker (273360) forked
Nov 29 08:04:32 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [NOTICE]   (273350) : Loading success.
Nov 29 08:04:32 compute-0 sudo[273352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:32 compute-0 sudo[273352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:32 compute-0 sudo[273352]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.594 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403472.5939136, cd169ba7-ec52-418c-a12c-4069b40674d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.595 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] VM Started (Lifecycle Event)
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.598 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.601 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:04:32 compute-0 sudo[273394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.606 255071 INFO nova.virt.libvirt.driver [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Instance spawned successfully.
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.606 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:04:32 compute-0 sudo[273394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:32 compute-0 sudo[273394]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.628 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.638 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.645 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.646 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.647 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.648 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.650 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.651 255071 DEBUG nova.virt.libvirt.driver [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.660 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.661 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403472.5970788, cd169ba7-ec52-418c-a12c-4069b40674d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.661 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] VM Paused (Lifecycle Event)
Nov 29 08:04:32 compute-0 sudo[273420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:32 compute-0 sudo[273420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:32 compute-0 sudo[273420]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.686 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.690 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403472.6011803, cd169ba7-ec52-418c-a12c-4069b40674d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.690 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] VM Resumed (Lifecycle Event)
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.707 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.711 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
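[editor's note] In the "Synchronizing instance power state" lines above, the numbers come from nova.compute.power_state: the database still holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING), which is expected mid-spawn and why sync_power_state skips the instance. The relevant constants, for reference:

    # nova.compute.power_state values referenced by the log lines above.
    NOSTATE = 0    # DB power_state before the spawn completes
    RUNNING = 1    # what libvirt reports once the guest starts
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7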
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.715 255071 INFO nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Took 6.33 seconds to spawn the instance on the hypervisor.
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.715 255071 DEBUG nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.726 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:32 compute-0 sudo[273445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:04:32 compute-0 sudo[273445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.771 255071 INFO nova.compute.manager [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Took 8.07 seconds to build instance.
Nov 29 08:04:32 compute-0 nova_compute[255040]: 2025-11-29 08:04:32.786 255071 DEBUG oslo_concurrency.lockutils [None req-4f025d6f-c9ed-4ed8-98c2-0ec21c088bcd 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
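[editor's note] The acquire/release bookkeeping above is oslo.concurrency's lockutils serializing build_and_run_instance per instance UUID (the lock name is the UUID). A sketch of the pattern those DEBUG lines trace; build_and_run and do_build are illustrative placeholders:

    # Sketch of the locking pattern behind the 'Lock ... acquired/released'
    # DEBUG lines (oslo.concurrency).
    from oslo_concurrency import lockutils

    def build_and_run(instance_uuid):
        # same semantics as nova's per-instance build lock
        with lockutils.lock(instance_uuid):
            do_build(instance_uuid)  # hypothetical placeholder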
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.185244408 +0000 UTC m=+0.098473915 container create c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:04:33 compute-0 systemd[1]: Started libpod-conmon-c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1.scope.
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.163771679 +0000 UTC m=+0.077001216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.284249607 +0000 UTC m=+0.197479124 container init c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.299059036 +0000 UTC m=+0.212288543 container start c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.303053994 +0000 UTC m=+0.216283521 container attach c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 08:04:33 compute-0 zealous_hellman[273525]: 167 167
Nov 29 08:04:33 compute-0 systemd[1]: libpod-c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1.scope: Deactivated successfully.
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.308446099 +0000 UTC m=+0.221675606 container died c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ba7ab699283f7d5867adb584b110460f1003739885729a31f15103d7c5804f-merged.mount: Deactivated successfully.
Nov 29 08:04:33 compute-0 podman[273510]: 2025-11-29 08:04:33.368417986 +0000 UTC m=+0.281647493 container remove c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:04:33 compute-0 systemd[1]: libpod-conmon-c9d9c426f13354ecb00c9522b2b0289e0ae49251036ad0d21aa1a35d557436c1.scope: Deactivated successfully.
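[editor's note] The short-lived zealous_hellman container above (create, start, attach, died, remove within ~0.3 s, printing "167 167") is consistent with cephadm probing the ceph uid/gid inside the image; 167 is the ceph user and group on RHEL-family images. A hypothetical equivalent, with the image digest copied from the log:

    # Sketch: one-shot container printing the ceph uid/gid, as cephadm
    # appears to do above ('167 167').
    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    out = subprocess.check_output(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         IMAGE, '-c', '%u %g', '/var/lib/ceph'],
        text=True)
    print(out.strip())  # expected: '167 167'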
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:04:33 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 29 08:04:33 compute-0 podman[273550]: 2025-11-29 08:04:33.530180866 +0000 UTC m=+0.025267962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:33 compute-0 podman[273550]: 2025-11-29 08:04:33.820948635 +0000 UTC m=+0.316035711 container create 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.834 255071 DEBUG nova.compute.manager [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.836 255071 DEBUG oslo_concurrency.lockutils [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.836 255071 DEBUG oslo_concurrency.lockutils [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.839 255071 DEBUG oslo_concurrency.lockutils [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.839 255071 DEBUG nova.compute.manager [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] No waiting events found dispatching network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:33 compute-0 nova_compute[255040]: 2025-11-29 08:04:33.839 255071 WARNING nova.compute.manager [req-56fcab3d-844d-48e5-8f35-d992a8ada453 req-7026b73a-1fa7-431f-ba72-1bcc533fd566 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received unexpected event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 for instance with vm_state active and task_state None.
Nov 29 08:04:33 compute-0 systemd[1]: Started libpod-conmon-4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f.scope.
Nov 29 08:04:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:34 compute-0 podman[273550]: 2025-11-29 08:04:34.271146612 +0000 UTC m=+0.766233748 container init 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 08:04:34 compute-0 podman[273550]: 2025-11-29 08:04:34.282323484 +0000 UTC m=+0.777410600 container start 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:04:34 compute-0 podman[273550]: 2025-11-29 08:04:34.288251463 +0000 UTC m=+0.783338569 container attach 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:34 compute-0 nova_compute[255040]: 2025-11-29 08:04:34.570 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723894409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723894409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:34 compute-0 ceph-mon[75237]: pgmap v1319: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 29 08:04:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/446747650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/446747650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:35 compute-0 nervous_benz[273566]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:04:35 compute-0 nervous_benz[273566]: --> relative data size: 1.0
Nov 29 08:04:35 compute-0 nervous_benz[273566]: --> All data devices are unavailable
Nov 29 08:04:35 compute-0 systemd[1]: libpod-4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f.scope: Deactivated successfully.
Nov 29 08:04:35 compute-0 systemd[1]: libpod-4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f.scope: Consumed 1.058s CPU time.
Nov 29 08:04:35 compute-0 podman[273550]: 2025-11-29 08:04:35.403629632 +0000 UTC m=+1.898716718 container died 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-63a713d72c433fa29f2dbf9f7f811a20a93452946cb187da53534e359c643333-merged.mount: Deactivated successfully.
Nov 29 08:04:35 compute-0 podman[273550]: 2025-11-29 08:04:35.48517963 +0000 UTC m=+1.980266706 container remove 4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:35 compute-0 systemd[1]: libpod-conmon-4919feb967277b6e7676667b9ff048610cd4d9dc28c9b2999766339467edeb9f.scope: Deactivated successfully.
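[editor's note] The nervous_benz run above is the "ceph-volume lvm batch" attempt from the sudo command at 08:04:32 rejecting all three pre-built LVs ("All data devices are unavailable"), typically because the LVs are already prepared or filtered out. ceph-volume's --report flag turns the same invocation into a dry run that explains device eligibility; a sketch:

    # Sketch: dry-run the batch call to see why each device is rejected.
    import json, subprocess

    devs = ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
            '/dev/ceph_vg2/ceph_lv2']
    out = subprocess.check_output(
        ['ceph-volume', 'lvm', 'batch', '--no-auto', *devs,
         '--report', '--format', 'json'],
        text=True)
    print(json.dumps(json.loads(out), indent=2))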
Nov 29 08:04:35 compute-0 NetworkManager[49116]: <info>  [1764403475.5174] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.516 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:35 compute-0 ovn_controller[153295]: 2025-11-29T08:04:35Z|00079|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:04:35 compute-0 NetworkManager[49116]: <info>  [1764403475.5184] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 08:04:35 compute-0 sudo[273445]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:35 compute-0 ovn_controller[153295]: 2025-11-29T08:04:35Z|00080|binding|INFO|Releasing lport 9fb3b3e1-f71e-47ab-acbc-6f0864db08ce from this chassis (sb_readonly=0)
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.562 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.567 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Nov 29 08:04:35 compute-0 sudo[273609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:35 compute-0 sudo[273609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:35 compute-0 sudo[273609]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.620 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:35 compute-0 sudo[273635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:35 compute-0 sudo[273635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:35 compute-0 sudo[273635]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/723894409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/723894409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/446747650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/446747650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
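[editor's note] The client.openstack "df" / "osd pool get-quota" pairs above (and again at 08:04:36) are consistent with Cinder's RBD driver polling capacity for the volumes pool. The same mon commands the monitor logs as dispatched can be issued directly; a sketch assuming python3-rados and a client.openstack keyring:

    # Sketch: issue the mon commands shown being dispatched above.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for prefix, extra in (('df', {}),
                              ('osd pool get-quota', {'pool': 'volumes'})):
            cmd = json.dumps({'prefix': prefix, 'format': 'json', **extra})
            ret, out, errs = cluster.mon_command(cmd, b'')
            print(prefix, '->', out.decode() if ret == 0 else errs)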
Nov 29 08:04:35 compute-0 sudo[273660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:35 compute-0 sudo[273660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:35 compute-0 sudo[273660]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:35 compute-0 sudo[273685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:04:35 compute-0 sudo[273685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.963 255071 DEBUG nova.compute.manager [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-changed-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.963 255071 DEBUG nova.compute.manager [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Refreshing instance network info cache due to event network-changed-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.964 255071 DEBUG oslo_concurrency.lockutils [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.964 255071 DEBUG oslo_concurrency.lockutils [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:35 compute-0 nova_compute[255040]: 2025-11-29 08:04:35.964 255071 DEBUG nova.network.neutron [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Refreshing network info cache for port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.170506925 +0000 UTC m=+0.054721606 container create 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:04:36 compute-0 systemd[1]: Started libpod-conmon-049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2.scope.
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.147431383 +0000 UTC m=+0.031646084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.257972253 +0000 UTC m=+0.142186964 container init 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.264822868 +0000 UTC m=+0.149037539 container start 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.269334719 +0000 UTC m=+0.153549610 container attach 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:36 compute-0 optimistic_noyce[273767]: 167 167
Nov 29 08:04:36 compute-0 systemd[1]: libpod-049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2.scope: Deactivated successfully.
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.273040939 +0000 UTC m=+0.157255620 container died 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 29 08:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6133f5d075646a1aef12d50d62c9d07e1b2c28d30fee99ceae8dad8b8c907d81-merged.mount: Deactivated successfully.
Nov 29 08:04:36 compute-0 podman[273751]: 2025-11-29 08:04:36.318750811 +0000 UTC m=+0.202965492 container remove 049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:04:36 compute-0 systemd[1]: libpod-conmon-049a57184c9698898eb885edb4ae71bba811bbfcd4503e4ee862dce445539bc2.scope: Deactivated successfully.
Nov 29 08:04:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1064491475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1064491475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 podman[273791]: 2025-11-29 08:04:36.537738345 +0000 UTC m=+0.076384220 container create 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:04:36 compute-0 systemd[1]: Started libpod-conmon-2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6.scope.
Nov 29 08:04:36 compute-0 podman[273791]: 2025-11-29 08:04:36.490786539 +0000 UTC m=+0.029432434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d778d723f6c7c2c2a294e94c23889913ce98c8dda5ca0f4e4c1c3e9c76ca3d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d778d723f6c7c2c2a294e94c23889913ce98c8dda5ca0f4e4c1c3e9c76ca3d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d778d723f6c7c2c2a294e94c23889913ce98c8dda5ca0f4e4c1c3e9c76ca3d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d778d723f6c7c2c2a294e94c23889913ce98c8dda5ca0f4e4c1c3e9c76ca3d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:36 compute-0 podman[273791]: 2025-11-29 08:04:36.631204294 +0000 UTC m=+0.169850199 container init 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:36 compute-0 podman[273791]: 2025-11-29 08:04:36.64623563 +0000 UTC m=+0.184881515 container start 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:36 compute-0 podman[273791]: 2025-11-29 08:04:36.650167736 +0000 UTC m=+0.188813611 container attach 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:04:36 compute-0 ceph-mon[75237]: pgmap v1320: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Nov 29 08:04:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1064491475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1064491475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:37 compute-0 cranky_thompson[273807]: {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     "0": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "devices": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "/dev/loop3"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             ],
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_name": "ceph_lv0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_size": "21470642176",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "name": "ceph_lv0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "tags": {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.crush_device_class": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.encrypted": "0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_id": "0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.vdo": "0"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             },
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "vg_name": "ceph_vg0"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         }
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     ],
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     "1": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "devices": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "/dev/loop4"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             ],
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_name": "ceph_lv1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_size": "21470642176",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "name": "ceph_lv1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "tags": {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.crush_device_class": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.encrypted": "0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_id": "1",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.vdo": "0"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             },
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "vg_name": "ceph_vg1"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         }
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     ],
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     "2": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "devices": [
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "/dev/loop5"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             ],
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_name": "ceph_lv2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_size": "21470642176",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "name": "ceph_lv2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "tags": {
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.crush_device_class": "",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.encrypted": "0",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osd_id": "2",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:                 "ceph.vdo": "0"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             },
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "type": "block",
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:             "vg_name": "ceph_vg2"
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:         }
Nov 29 08:04:37 compute-0 cranky_thompson[273807]:     ]
Nov 29 08:04:37 compute-0 cranky_thompson[273807]: }
Nov 29 08:04:37 compute-0 systemd[1]: libpod-2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6.scope: Deactivated successfully.
Nov 29 08:04:37 compute-0 podman[273817]: 2025-11-29 08:04:37.523332694 +0000 UTC m=+0.037826211 container died 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d778d723f6c7c2c2a294e94c23889913ce98c8dda5ca0f4e4c1c3e9c76ca3d73-merged.mount: Deactivated successfully.
Nov 29 08:04:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Nov 29 08:04:37 compute-0 podman[273817]: 2025-11-29 08:04:37.600381041 +0000 UTC m=+0.114874528 container remove 2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:04:37 compute-0 systemd[1]: libpod-conmon-2d2cc192aa863932b451d7caa85a73d5c54cf3f2f62caed74b8e4f688a2cc3d6.scope: Deactivated successfully.
Nov 29 08:04:37 compute-0 sudo[273685]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:37 compute-0 sudo[273830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:37 compute-0 sudo[273830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:37 compute-0 sudo[273830]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:37 compute-0 sudo[273855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:37 compute-0 sudo[273855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:37 compute-0 sudo[273855]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:37 compute-0 sudo[273880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:37 compute-0 sudo[273880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:37 compute-0 sudo[273880]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:37 compute-0 sudo[273905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:04:37 compute-0 sudo[273905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:38 compute-0 nova_compute[255040]: 2025-11-29 08:04:38.309 255071 DEBUG nova.network.neutron [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updated VIF entry in instance network info cache for port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:38 compute-0 nova_compute[255040]: 2025-11-29 08:04:38.312 255071 DEBUG nova.network.neutron [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating instance_info_cache with network_info: [{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.306721413 +0000 UTC m=+0.056154304 container create 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:38 compute-0 nova_compute[255040]: 2025-11-29 08:04:38.340 255071 DEBUG oslo_concurrency.lockutils [req-b8eb49e5-e002-477e-88c0-17698576e353 req-4e2ea759-50b3-4c26-a5e9-16ddd8ca57c1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:38 compute-0 systemd[1]: Started libpod-conmon-7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f.scope.
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.280840865 +0000 UTC m=+0.030273786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.418403754 +0000 UTC m=+0.167836665 container init 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.426992856 +0000 UTC m=+0.176425747 container start 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:04:38 compute-0 elated_lalande[273987]: 167 167
Nov 29 08:04:38 compute-0 systemd[1]: libpod-7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f.scope: Deactivated successfully.
Nov 29 08:04:38 compute-0 conmon[273987]: conmon 7ed7c686d64e5ed5a053 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f.scope/container/memory.events
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.435742571 +0000 UTC m=+0.185175482 container attach 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.436586854 +0000 UTC m=+0.186019745 container died 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-77bdf20ef5d9fec4c744627de13b3bf09207870917b31355f8b6f9da861011aa-merged.mount: Deactivated successfully.
Nov 29 08:04:38 compute-0 podman[273971]: 2025-11-29 08:04:38.48504029 +0000 UTC m=+0.234473181 container remove 7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lalande, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:04:38 compute-0 systemd[1]: libpod-conmon-7ed7c686d64e5ed5a053b4a9ba901a766362f638ef35a13d407aece377a8986f.scope: Deactivated successfully.
Nov 29 08:04:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:38 compute-0 podman[274012]: 2025-11-29 08:04:38.667568481 +0000 UTC m=+0.044747577 container create 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:38 compute-0 systemd[1]: Started libpod-conmon-44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb.scope.
Nov 29 08:04:38 compute-0 podman[274012]: 2025-11-29 08:04:38.647630983 +0000 UTC m=+0.024810089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:38 compute-0 ceph-mon[75237]: pgmap v1321: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Nov 29 08:04:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0772ea9b333a56791c8f1b13dd88592552256f5693868d8b808a4f9129f51692/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0772ea9b333a56791c8f1b13dd88592552256f5693868d8b808a4f9129f51692/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0772ea9b333a56791c8f1b13dd88592552256f5693868d8b808a4f9129f51692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0772ea9b333a56791c8f1b13dd88592552256f5693868d8b808a4f9129f51692/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:38 compute-0 podman[274012]: 2025-11-29 08:04:38.802778566 +0000 UTC m=+0.179957752 container init 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 08:04:38 compute-0 podman[274012]: 2025-11-29 08:04:38.809922338 +0000 UTC m=+0.187101474 container start 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:04:38 compute-0 podman[274012]: 2025-11-29 08:04:38.815662803 +0000 UTC m=+0.192841999 container attach 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:04:38
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images']
Nov 29 08:04:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:04:39 compute-0 nova_compute[255040]: 2025-11-29 08:04:39.574 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Nov 29 08:04:39 compute-0 sweet_robinson[274029]: {
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_id": 2,
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "type": "bluestore"
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     },
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_id": 0,
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "type": "bluestore"
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     },
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_id": 1,
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:         "type": "bluestore"
Nov 29 08:04:39 compute-0 sweet_robinson[274029]:     }
Nov 29 08:04:39 compute-0 sweet_robinson[274029]: }
Nov 29 08:04:39 compute-0 systemd[1]: libpod-44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb.scope: Deactivated successfully.
Nov 29 08:04:39 compute-0 systemd[1]: libpod-44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb.scope: Consumed 1.141s CPU time.
Nov 29 08:04:39 compute-0 podman[274012]: 2025-11-29 08:04:39.956140808 +0000 UTC m=+1.333319954 container died 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 08:04:40 compute-0 nova_compute[255040]: 2025-11-29 08:04:40.474 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:40 compute-0 nova_compute[255040]: 2025-11-29 08:04:40.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:41 compute-0 ceph-mon[75237]: pgmap v1322: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Nov 29 08:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0772ea9b333a56791c8f1b13dd88592552256f5693868d8b808a4f9129f51692-merged.mount: Deactivated successfully.
Nov 29 08:04:41 compute-0 podman[274012]: 2025-11-29 08:04:41.585928314 +0000 UTC m=+2.963107420 container remove 44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:04:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 146 op/s
Nov 29 08:04:41 compute-0 systemd[1]: libpod-conmon-44c42c9d347b126b919ab7e85cf61d7a91becc0090031291e1e3173c8ceb03eb.scope: Deactivated successfully.
Nov 29 08:04:41 compute-0 sudo[273905]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:04:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:04:41 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:41 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5c0dc292-9ceb-46de-8da2-7390aa40511d does not exist
Nov 29 08:04:41 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9cb578e7-f1a2-44c3-8ec3-a18f84a2de66 does not exist
Nov 29 08:04:41 compute-0 sudo[274074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:41 compute-0 sudo[274074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:41 compute-0 sudo[274074]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:41 compute-0 sudo[274099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:04:41 compute-0 sudo[274099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:41 compute-0 sudo[274099]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:42 compute-0 ceph-mon[75237]: pgmap v1323: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 146 op/s
Nov 29 08:04:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:42 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:04:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:04:43 compute-0 nova_compute[255040]: 2025-11-29 08:04:43.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 137 op/s
Nov 29 08:04:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:44 compute-0 nova_compute[255040]: 2025-11-29 08:04:44.576 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:45 compute-0 ceph-mon[75237]: pgmap v1324: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 137 op/s
Nov 29 08:04:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 138 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 539 KiB/s wr, 126 op/s
Nov 29 08:04:45 compute-0 nova_compute[255040]: 2025-11-29 08:04:45.628 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:46 compute-0 ceph-mon[75237]: pgmap v1325: 305 pgs: 305 active+clean; 138 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 539 KiB/s wr, 126 op/s
Nov 29 08:04:46 compute-0 ovn_controller[153295]: 2025-11-29T08:04:46Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:a4:81 10.100.0.8
Nov 29 08:04:46 compute-0 ovn_controller[153295]: 2025-11-29T08:04:46Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:a4:81 10.100.0.8
Nov 29 08:04:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 138 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 525 KiB/s wr, 42 op/s
Nov 29 08:04:47 compute-0 podman[274125]: 2025-11-29 08:04:47.972939748 +0000 UTC m=+0.128606838 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:04:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:48 compute-0 ceph-mon[75237]: pgmap v1326: 305 pgs: 305 active+clean; 138 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 525 KiB/s wr, 42 op/s
Nov 29 08:04:49 compute-0 nova_compute[255040]: 2025-11-29 08:04:49.367 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:49 compute-0 nova_compute[255040]: 2025-11-29 08:04:49.580 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 174 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Nov 29 08:04:49 compute-0 ceph-mon[75237]: pgmap v1327: 305 pgs: 305 active+clean; 174 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Nov 29 08:04:50 compute-0 nova_compute[255040]: 2025-11-29 08:04:50.630 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 199 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Nov 29 08:04:52 compute-0 nova_compute[255040]: 2025-11-29 08:04:52.315 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-0 ceph-mon[75237]: pgmap v1328: 305 pgs: 305 active+clean; 199 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Nov 29 08:04:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 08:04:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:54 compute-0 ceph-mon[75237]: pgmap v1329: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 08:04:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2053254406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2053254406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:54 compute-0 nova_compute[255040]: 2025-11-29 08:04:54.581 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2053254406' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2053254406' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 111 op/s
Nov 29 08:04:55 compute-0 nova_compute[255040]: 2025-11-29 08:04:55.632 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007571745580191443 of space, bias 1.0, pg target 0.2271523674057433 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006928207617047601 of space, bias 1.0, pg target 0.207846228511428 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:04:56 compute-0 ceph-mon[75237]: pgmap v1330: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 111 op/s
Nov 29 08:04:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891692795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891692795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2891692795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2891692795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.4 MiB/s wr, 101 op/s
Nov 29 08:04:57 compute-0 podman[274152]: 2025-11-29 08:04:57.891407731 +0000 UTC m=+0.059726771 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:04:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:58 compute-0 ceph-mon[75237]: pgmap v1331: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 253 KiB/s rd, 3.4 MiB/s wr, 101 op/s
Nov 29 08:04:59 compute-0 nova_compute[255040]: 2025-11-29 08:04:59.583 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 187 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 3.4 MiB/s wr, 117 op/s
Nov 29 08:05:00 compute-0 nova_compute[255040]: 2025-11-29 08:05:00.635 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:00 compute-0 ceph-mon[75237]: pgmap v1332: 305 pgs: 305 active+clean; 187 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 3.4 MiB/s wr, 117 op/s
Nov 29 08:05:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4014710506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4014710506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:01 compute-0 anacron[30971]: Job `cron.monthly' started
Nov 29 08:05:01 compute-0 anacron[30971]: Job `cron.monthly' terminated
Nov 29 08:05:01 compute-0 anacron[30971]: Normal exit (3 jobs run)
Nov 29 08:05:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.675 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.676 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.700 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.794 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.795 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.805 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.805 255071 INFO nova.compute.claims [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:05:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4014710506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4014710506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:01 compute-0 nova_compute[255040]: 2025-11-29 08:05:01.913 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055618723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.417 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.425 255071 DEBUG nova.compute.provider_tree [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.440 255071 DEBUG nova.scheduler.client.report [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.459 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.460 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.516 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.517 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.547 255071 INFO nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.570 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.650 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.651 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.652 255071 INFO nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Creating image(s)
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.678 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.703 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.725 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.730 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.753 255071 DEBUG nova.policy [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d13a2468b4442809f7968c612cb7523', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65fc2f72f64e4c91b66d05d7ebaf9e4c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.812 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.813 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.814 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.815 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.843 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:02 compute-0 nova_compute[255040]: 2025-11-29 08:05:02.848 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 ef9475c4-846b-4370-8330-5a59e328bc07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 ceph-mon[75237]: pgmap v1333: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 29 08:05:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3055618723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 podman[274250]: 2025-11-29 08:05:02.918076642 +0000 UTC m=+0.073612586 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:05:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 548 KiB/s wr, 46 op/s
Nov 29 08:05:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 29 08:05:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:04.149 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:04 compute-0 nova_compute[255040]: 2025-11-29 08:05:04.149 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:04.150 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:05:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 08:05:04 compute-0 ceph-mon[75237]: pgmap v1334: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 548 KiB/s wr, 46 op/s
Nov 29 08:05:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 29 08:05:04 compute-0 nova_compute[255040]: 2025-11-29 08:05:04.255 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Successfully created port: 81864c7c-554a-4b81-ba45-62d08b95c981 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:04 compute-0 nova_compute[255040]: 2025-11-29 08:05:04.586 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 ceph-mon[75237]: osdmap e222: 3 total, 3 up, 3 in
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.503 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Successfully updated port: 81864c7c-554a-4b81-ba45-62d08b95c981 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.522 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.523 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquired lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.523 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.594 255071 DEBUG nova.compute.manager [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-changed-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.595 255071 DEBUG nova.compute.manager [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Refreshing instance network info cache due to event network-changed-81864c7c-554a-4b81-ba45-62d08b95c981. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.595 255071 DEBUG oslo_concurrency.lockutils [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 191 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.684 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.711 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 nova_compute[255040]: 2025-11-29 08:05:05.941 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 ef9475c4-846b-4370-8330-5a59e328bc07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.012 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] resizing rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:06.152 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.208 255071 DEBUG nova.objects.instance [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'migration_context' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.227 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.227 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Ensure instance console log exists: /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.228 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.229 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:06 compute-0 nova_compute[255040]: 2025-11-29 08:05:06.229 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:06 compute-0 ceph-mon[75237]: pgmap v1336: 305 pgs: 305 active+clean; 191 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.314 255071 DEBUG nova.network.neutron [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updating instance_info_cache with network_info: [{"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.368 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Releasing lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.368 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Instance network_info: |[{"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.369 255071 DEBUG oslo_concurrency.lockutils [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.369 255071 DEBUG nova.network.neutron [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Refreshing network info cache for port 81864c7c-554a-4b81-ba45-62d08b95c981 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.374 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Start _get_guest_xml network_info=[{"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.382 255071 WARNING nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.421 255071 DEBUG nova.virt.libvirt.host [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.422 255071 DEBUG nova.virt.libvirt.host [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.520 255071 DEBUG nova.virt.libvirt.host [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.521 255071 DEBUG nova.virt.libvirt.host [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.521 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.522 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.522 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.522 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.523 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.523 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.523 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.523 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.523 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.524 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.524 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.524 255071 DEBUG nova.virt.hardware [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.528 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 29 08:05:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 191 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.880 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.880 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.897 255071 DEBUG nova.objects.instance [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.921 255071 INFO nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Ignoring supplied device name: /dev/vdb
Nov 29 08:05:07 compute-0 nova_compute[255040]: 2025-11-29 08:05:07.933 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.162 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.163 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.163 255071 INFO nova.compute.manager [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Attaching volume 4cdf7634-b0c7-4205-bd7c-0875c4bd1eba to /dev/vdb
Nov 29 08:05:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 29 08:05:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712382475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:08 compute-0 ceph-mon[75237]: pgmap v1337: 305 pgs: 305 active+clean; 191 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.228 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.253 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.259 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.356 255071 DEBUG os_brick.utils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.359 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.389 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.390 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[eb362f11-bba3-4e19-9460-c3de8a4e3198]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.392 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.406 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.406 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[08052371-c098-491c-a3dc-ebb742f767b5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.410 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.422 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.423 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea95d3f-6208-437a-a674-b59a6f2c1718]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.426 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbfb59e-7c2e-4db1-bfa2-b45a1d683894]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.427 255071 DEBUG oslo_concurrency.processutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.460 255071 DEBUG oslo_concurrency.processutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.465 255071 DEBUG os_brick.initiator.connectors.lightos [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.465 255071 DEBUG os_brick.initiator.connectors.lightos [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.466 255071 DEBUG os_brick.initiator.connectors.lightos [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.467 255071 DEBUG os_brick.utils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
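Note: the get_connector_properties trace above (multipathd status, the iSCSI initiator name, findmnt, nvme version, the LightOS probe) is os-brick assembling the connector dictionary that the volume service uses to decide how to export the volume to this host. Below is a rough stand-alone sketch of the same probes, assuming the conventional file locations; collect_connector_props is a hypothetical helper for illustration, not os-brick's API.

    # Minimal sketch (not os-brick itself): probe the same identifiers that
    # get_connector_properties logs above, assuming the usual file locations.
    import re
    import subprocess
    import uuid

    def collect_connector_props():  # hypothetical helper name
        props = {"platform": "x86_64", "os_type": "linux"}

        # iSCSI initiator name, mirroring the "cat /etc/iscsi/initiatorname.iscsi" call above
        try:
            text = open("/etc/iscsi/initiatorname.iscsi").read()
            m = re.search(r"InitiatorName=(\S+)", text)
            if m:
                props["initiator"] = m.group(1)
        except OSError:
            pass

        # multipathd reachability, mirroring "multipathd show status"
        rc = subprocess.run(["multipathd", "show", "status"],
                            capture_output=True, text=True).returncode
        props["multipath"] = (rc == 0)

        # Host NQN for NVMe-oF; /etc/nvme/hostnqn is the conventional location,
        # with a uuid-based NQN generated as a fallback (same format as the log above)
        try:
            props["nqn"] = open("/etc/nvme/hostnqn").read().strip()
        except OSError:
            props["nqn"] = f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

        return props

    if __name__ == "__main__":
        print(collect_connector_props())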
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.468 255071 DEBUG nova.virt.block_device [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating existing volume attachment record: 3f114d5a-6144-4366-9759-0d4d16a4350f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:05:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170137534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.734 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
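Note: each "ceph mon dump --format=json --id openstack" call above fetches the monitor map; the monitor address it returns is what later shows up as the <host name="192.168.122.100" port="6789"/> element inside the RBD disk definitions. A sketch of the same call and the fields usually present in its JSON; the field names ("mons", "name", "public_addr") follow typical ceph output and should be verified against the Ceph release in use.

    # Fetch the monitor map the way the logged command does and list monitor addresses.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        # public_addr usually looks like "192.168.122.100:6789/0"
        print(mon.get("name"), mon.get("public_addr"))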
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.736 255071 DEBUG nova.virt.libvirt.vif [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1131396906',display_name='tempest-VolumesExtendAttachedTest-instance-1131396906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1131396906',id=7,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtAWAu/tHfQ+gvLOin9nwfuA44WRb8BkhOR7BuFicQSRE2YvkDNdJHZ+xBcp7Nt990mVKqwKK+dkhENtNo30xapMYTR4HULTUocDd4F1NCeMGLP4UrL5jQbb2RKAX9+XA==',key_name='tempest-keypair-1417582676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65fc2f72f64e4c91b66d05d7ebaf9e4c',ramdisk_id='',reservation_id='r-nl1nrz56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1821035368',owner_user_name='tempest-VolumesExtendAttachedTest-1821035368-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7d13a2468b4442809f7968c612cb7523',uuid=ef9475c4-846b-4370-8330-5a59e328bc07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": 
"81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.737 255071 DEBUG nova.network.os_vif_util [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converting VIF {"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.739 255071 DEBUG nova.network.os_vif_util [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.740 255071 DEBUG nova.objects.instance [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'pci_devices' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.757 255071 DEBUG nova.network.neutron [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updated VIF entry in instance network info cache for port 81864c7c-554a-4b81-ba45-62d08b95c981. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.758 255071 DEBUG nova.network.neutron [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updating instance_info_cache with network_info: [{"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.762 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <uuid>ef9475c4-846b-4370-8330-5a59e328bc07</uuid>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <name>instance-00000007</name>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-1131396906</nova:name>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:05:07</nova:creationTime>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:user uuid="7d13a2468b4442809f7968c612cb7523">tempest-VolumesExtendAttachedTest-1821035368-project-member</nova:user>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:project uuid="65fc2f72f64e4c91b66d05d7ebaf9e4c">tempest-VolumesExtendAttachedTest-1821035368</nova:project>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <nova:port uuid="81864c7c-554a-4b81-ba45-62d08b95c981">
Nov 29 08:05:08 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <system>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="serial">ef9475c4-846b-4370-8330-5a59e328bc07</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="uuid">ef9475c4-846b-4370-8330-5a59e328bc07</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </system>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <os>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </os>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <features>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </features>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/ef9475c4-846b-4370-8330-5a59e328bc07_disk">
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </source>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/ef9475c4-846b-4370-8330-5a59e328bc07_disk.config">
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </source>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:05:08 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:6f:40:f3"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <target dev="tap81864c7c-55"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/console.log" append="off"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <video>
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </video>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:05:08 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:05:08 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:05:08 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:05:08 compute-0 nova_compute[255040]: </domain>
Nov 29 08:05:08 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
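Note: the block above is the complete libvirt domain XML that _get_guest_xml produced for instance-00000007: an RBD-backed root disk on vda, the config-drive image as a SATA cdrom, and the OVS tap interface. When cross-checking a log like this against a running guest, the disks can be pulled back out of a saved dump (for example from `virsh dumpxml instance-00000007`) with the standard library; "domain.xml" below is a hypothetical local copy of that dump.

    # Extract the RBD-backed disks from a saved copy of the domain XML logged above.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            hosts = [f'{h.get("name")}:{h.get("port")}' for h in source.findall("host")]
            print(target.get("dev"), source.get("name"), hosts)
    # For the XML above this prints vda and sda, each backed by 192.168.122.100:6789.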
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.763 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Preparing to wait for external event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.764 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.764 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.765 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.766 255071 DEBUG nova.virt.libvirt.vif [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1131396906',display_name='tempest-VolumesExtendAttachedTest-instance-1131396906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1131396906',id=7,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtAWAu/tHfQ+gvLOin9nwfuA44WRb8BkhOR7BuFicQSRE2YvkDNdJHZ+xBcp7Nt990mVKqwKK+dkhENtNo30xapMYTR4HULTUocDd4F1NCeMGLP4UrL5jQbb2RKAX9+XA==',key_name='tempest-keypair-1417582676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65fc2f72f64e4c91b66d05d7ebaf9e4c',ramdisk_id='',reservation_id='r-nl1nrz56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1821035368',owner_user_name='tempest-VolumesExtendAttachedTest-1821035368-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7d13a2468b4442809f7968c612cb7523',uuid=ef9475c4-846b-4370-8330-5a59e328bc07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": 
"81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.766 255071 DEBUG nova.network.os_vif_util [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converting VIF {"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.767 255071 DEBUG nova.network.os_vif_util [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.767 255071 DEBUG os_vif [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.768 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.769 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.769 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.775 255071 DEBUG oslo_concurrency.lockutils [req-b07c7520-fd04-4923-888d-24aa77b4f0d9 req-b7932994-44fe-45f5-b1fb-409edab7787d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.776 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.777 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81864c7c-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.777 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81864c7c-55, col_values=(('external_ids', {'iface-id': '81864c7c-554a-4b81-ba45-62d08b95c981', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:40:f3', 'vm-uuid': 'ef9475c4-846b-4370-8330-5a59e328bc07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.779 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:08 compute-0 NetworkManager[49116]: <info>  [1764403508.7807] manager: (tap81864c7c-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.782 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.788 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.788 255071 INFO os_vif [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55')
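Note: the transactions logged just above (AddBridgeCommand, AddPortCommand, DbSetCommand on the Interface row) are os-vif plugging the tap into br-int through ovsdbapp. Roughly the same result can be produced from the command line with ovs-vsctl; the sketch below drives it through subprocess with the port name and external_ids copied from the log, as a debugging equivalence rather than what os-vif actually executes.

    # Rough CLI equivalent of the ovsdbapp transaction logged above
    # (AddPortCommand plus DbSetCommand on the Interface row).
    import subprocess

    port = "tap81864c7c-55"
    external_ids = {
        "iface-id": "81864c7c-554a-4b81-ba45-62d08b95c981",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:6f:40:f3",
        "vm-uuid": "ef9475c4-846b-4370-8330-5a59e328bc07",
    }

    cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)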
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.972 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.973 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.973 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No VIF found with MAC fa:16:3e:6f:40:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:08 compute-0 nova_compute[255040]: 2025-11-29 08:05:08.974 255071 INFO nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Using config drive
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.001 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3684768919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.222 255071 DEBUG nova.objects.instance [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.245 255071 DEBUG nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Attempting to attach volume 4cdf7634-b0c7-4205-bd7c-0875c4bd1eba with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
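Note: the warning above means the volume is being attached on the virtio (virtio-blk) bus, which does not pass discard/TRIM through, so the discard="unmap" in the disk XML that follows has no effect for this guest. If TRIM matters, the usual approach is to have the guest use a virtio-scsi controller instead; the image properties in the sketch below (hw_scsi_model, hw_disk_bus) are the commonly documented ones, but treat them as an assumption to confirm for your release before relying on this.

    # Sketch: mark the boot image so instances built from it place disks on
    # virtio-scsi, which supports discard/TRIM (unlike the virtio bus flagged above).
    import subprocess

    image = "36a9388d-0d77-4d24-a915-be92247e5dbc"  # image_ref used by this boot
    subprocess.run(
        ["openstack", "image", "set",
         "--property", "hw_scsi_model=virtio-scsi",
         "--property", "hw_disk_bus=scsi",
         image],
        check=True,
    )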
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.249 255071 DEBUG nova.virt.libvirt.guest [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:05:09 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-4cdf7634-b0c7-4205-bd7c-0875c4bd1eba">
Nov 29 08:05:09 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:05:09 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:09 compute-0 nova_compute[255040]:   <serial>4cdf7634-b0c7-4205-bd7c-0875c4bd1eba</serial>
Nov 29 08:05:09 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:09 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
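Note: the "attach device xml" block above is what nova.virt.libvirt.guest hands to libvirt for the volume attach. With the libvirt Python bindings the same attach looks roughly like the sketch below; the connection URI and the live+persistent flag combination are assumptions about a typical deployment, and the disk XML is copied from the log.

    # Sketch of attaching the logged disk XML with the libvirt Python bindings.
    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-4cdf7634-b0c7-4205-bd7c-0875c4bd1eba">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>4cdf7634-b0c7-4205-bd7c-0875c4bd1eba</serial>
    </disk>"""

    conn = libvirt.open("qemu:///system")          # URI assumed
    dom = conn.lookupByName("instance-00000007")
    flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
    dom.attachDeviceFlags(disk_xml, flags)         # attach live and persist in the config
    conn.close()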
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.302 255071 INFO nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Creating config drive at /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.310 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5uack57l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:09 compute-0 ceph-mon[75237]: osdmap e223: 3 total, 3 up, 3 in
Nov 29 08:05:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/712382475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1170137534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3684768919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.446 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5uack57l" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.478 255071 DEBUG nova.storage.rbd_utils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] rbd image ef9475c4-846b-4370-8330-5a59e328bc07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.482 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config ef9475c4-846b-4370-8330-5a59e328bc07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.592 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.622 255071 DEBUG nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.623 255071 DEBUG nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.623 255071 DEBUG nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.623 255071 DEBUG nova.virt.libvirt.driver [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] No VIF found with MAC fa:16:3e:17:a4:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.755 255071 DEBUG oslo_concurrency.processutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config ef9475c4-846b-4370-8330-5a59e328bc07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.756 255071 INFO nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Deleting local config drive /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config because it was imported into RBD.
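Note: the config-drive sequence above (mkisofs builds the "config-2" ISO from a temporary directory, rbd import pushes it into the vms pool, then the local copy is deleted) can be reproduced by hand when debugging config-drive contents. A sketch of the two commands as logged; "staging_dir" stands in for the temporary directory Nova populated (/tmp/tmp5uack57l in the log).

    # Hand-run equivalent of the config-drive build and import logged above.
    import subprocess

    staging_dir = "/tmp/configdrive-staging"   # placeholder for Nova's temp dir
    iso_path = "/var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", staging_dir],
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso_path,
         "ef9475c4-846b-4370-8330-5a59e328bc07_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )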
Nov 29 08:05:09 compute-0 kernel: tap81864c7c-55: entered promiscuous mode
Nov 29 08:05:09 compute-0 NetworkManager[49116]: <info>  [1764403509.8080] manager: (tap81864c7c-55): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.808 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:09 compute-0 ovn_controller[153295]: 2025-11-29T08:05:09Z|00081|binding|INFO|Claiming lport 81864c7c-554a-4b81-ba45-62d08b95c981 for this chassis.
Nov 29 08:05:09 compute-0 ovn_controller[153295]: 2025-11-29T08:05:09Z|00082|binding|INFO|81864c7c-554a-4b81-ba45-62d08b95c981: Claiming fa:16:3e:6f:40:f3 10.100.0.8
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.817 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:40:f3 10.100.0.8'], port_security=['fa:16:3e:6f:40:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ef9475c4-846b-4370-8330-5a59e328bc07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65fc2f72f64e4c91b66d05d7ebaf9e4c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c32f199-4e61-41f8-97e9-097ce92f8499', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=750ebe7d-6ab1-4015-999e-8f79293608fa, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=81864c7c-554a-4b81-ba45-62d08b95c981) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.819 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 81864c7c-554a-4b81-ba45-62d08b95c981 in datapath 905664a2-5abc-4150-a1c0-f1b86c7f655e bound to our chassis
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.820 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905664a2-5abc-4150-a1c0-f1b86c7f655e
Nov 29 08:05:09 compute-0 ovn_controller[153295]: 2025-11-29T08:05:09Z|00083|binding|INFO|Setting lport 81864c7c-554a-4b81-ba45-62d08b95c981 ovn-installed in OVS
Nov 29 08:05:09 compute-0 ovn_controller[153295]: 2025-11-29T08:05:09Z|00084|binding|INFO|Setting lport 81864c7c-554a-4b81-ba45-62d08b95c981 up in Southbound
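Note: at this point ovn-controller has claimed logical port 81864c7c-554a-4b81-ba45-62d08b95c981 for this chassis and marked it up in the southbound database; this is what ultimately produces the network-vif-plugged event Nova is waiting for. When a claim like this does not happen, the binding can be inspected directly; the sketch below shells out to ovn-sbctl and assumes it is reachable from wherever it runs (the host or the ovn_controller container).

    # Inspect the southbound Port_Binding row for the logical port claimed above.
    import subprocess

    lport = "81864c7c-554a-4b81-ba45-62d08b95c981"
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        check=True,
    )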
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.830 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.835 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[495cdeaf-a5b1-4510-97d7-e33cd4b8c490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.837 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap905664a2-51 in ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
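Note: the metadata agent is provisioning the network's metadata namespace here, creating the veth pair tap905664a2-50 / tap905664a2-51 and moving one end into the ovnmeta- namespace (it does this through pyroute2 under privsep, which is what the surrounding reply lines are). A rough ip(8) equivalent, with the names copied from the log; the agent also plugs the host-side end into br-int afterwards, which is not shown.

    # Rough ip(8) equivalent of the veth provisioning performed by the metadata agent.
    import subprocess

    ns = "ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e"
    outer, inner = "tap905664a2-50", "tap905664a2-51"

    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", outer, "type", "veth", "peer", "name", inner], check=True)
    subprocess.run(["ip", "link", "set", inner, "netns", ns], check=True)
    subprocess.run(["ip", "netns", "exec", ns, "ip", "link", "set", inner, "up"], check=True)
    subprocess.run(["ip", "link", "set", outer, "up"], check=True)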
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.839 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap905664a2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.839 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9b40818b-d8d7-46d0-b8f4-a9bce8ccfdf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.840 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba8409a-13e5-43fc-838a-0355f2b5eb76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 nova_compute[255040]: 2025-11-29 08:05:09.844 255071 DEBUG oslo_concurrency.lockutils [None req-442c5a34-d67a-4600-a36a-c03c464d7e5e 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:09 compute-0 systemd-machined[216271]: New machine qemu-7-instance-00000007.
Nov 29 08:05:09 compute-0 systemd-udevd[274545]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:09 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
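Note: systemd-machined has registered the guest as qemu-7-instance-00000007, so the libvirt domain is now running. When correlating entries like these with a live domain, a quick cross-check of the machine name against the Nova instance is usually enough; both commands below are standard, though their output formats vary between releases.

    # Cross-check the registered machine against the libvirt domain.
    import subprocess

    subprocess.run(["machinectl", "status", "qemu-7-instance-00000007"], check=False)
    subprocess.run(["virsh", "dominfo", "instance-00000007"], check=False)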
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.859 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[66ff8fb9-318a-4cab-b825-870ee84447aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 NetworkManager[49116]: <info>  [1764403509.8728] device (tap81864c7c-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:09 compute-0 NetworkManager[49116]: <info>  [1764403509.8736] device (tap81864c7c-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.881 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb6ddd1-e804-483e-bb65-9b40a699c776]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.918 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[89f775bd-217c-4a58-8d34-6d83072a77f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.924 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[83e78993-44d1-4afd-be16-8854876c7fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 NetworkManager[49116]: <info>  [1764403509.9258] manager: (tap905664a2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.963 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[530d4d66-fd6e-4ce5-9a3c-aeb1b5e12c64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.967 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[b46a9ba4-ba01-4f62-870f-4ec90ba7f892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:09 compute-0 NetworkManager[49116]: <info>  [1764403509.9907] device (tap905664a2-50): carrier: link connected
Nov 29 08:05:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:09.998 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[fba689ff-4715-4484-a82f-5b3c7303652c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.020 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[edf13860-41be-4c91-b980-f1c0cc07970f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905664a2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:61:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574732, 'reachable_time': 43565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274576, 'error': None, 'target': 'ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.040 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1b90eab6-3d82-441a-8d5c-8ae451d80673]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:6188'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574732, 'tstamp': 574732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274577, 'error': None, 'target': 'ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.060 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[356b1650-6386-4f1b-a33c-a24722e541d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905664a2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:61:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574732, 'reachable_time': 43565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274578, 'error': None, 'target': 'ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.068 255071 DEBUG nova.compute.manager [req-e0ec355b-d8b9-4004-90c5-b5df793457e8 req-ec2b65a2-023a-4df3-9866-45bcd73b4fc3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.068 255071 DEBUG oslo_concurrency.lockutils [req-e0ec355b-d8b9-4004-90c5-b5df793457e8 req-ec2b65a2-023a-4df3-9866-45bcd73b4fc3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.069 255071 DEBUG oslo_concurrency.lockutils [req-e0ec355b-d8b9-4004-90c5-b5df793457e8 req-ec2b65a2-023a-4df3-9866-45bcd73b4fc3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.070 255071 DEBUG oslo_concurrency.lockutils [req-e0ec355b-d8b9-4004-90c5-b5df793457e8 req-ec2b65a2-023a-4df3-9866-45bcd73b4fc3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.070 255071 DEBUG nova.compute.manager [req-e0ec355b-d8b9-4004-90c5-b5df793457e8 req-ec2b65a2-023a-4df3-9866-45bcd73b4fc3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Processing event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.107 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7def9122-4e52-493e-9de8-924d73afcce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.189 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2eeb77ba-ea87-4582-8169-deac117b287c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.192 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905664a2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.192 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.192 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905664a2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.194 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 NetworkManager[49116]: <info>  [1764403510.1957] manager: (tap905664a2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 08:05:10 compute-0 kernel: tap905664a2-50: entered promiscuous mode
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.198 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.200 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905664a2-50, col_values=(('external_ids', {'iface-id': '9b27068e-ab44-47df-bfa7-ae1ee2b760c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.201 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 ovn_controller[153295]: 2025-11-29T08:05:10Z|00085|binding|INFO|Releasing lport 9b27068e-ab44-47df-bfa7-ae1ee2b760c5 from this chassis (sb_readonly=0)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.202 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.203 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/905664a2-5abc-4150-a1c0-f1b86c7f655e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/905664a2-5abc-4150-a1c0-f1b86c7f655e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.204 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[88aa902c-c203-4f7e-951c-7b0b37e9854b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.205 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-905664a2-5abc-4150-a1c0-f1b86c7f655e
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/905664a2-5abc-4150-a1c0-f1b86c7f655e.pid.haproxy
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 905664a2-5abc-4150-a1c0-f1b86c7f655e
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:10.206 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'env', 'PROCESS_TAG=haproxy-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/905664a2-5abc-4150-a1c0-f1b86c7f655e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.218 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 29 08:05:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 08:05:10 compute-0 ceph-mon[75237]: pgmap v1339: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Nov 29 08:05:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 29 08:05:10 compute-0 podman[274625]: 2025-11-29 08:05:10.590705772 +0000 UTC m=+0.024268716 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:10 compute-0 podman[274625]: 2025-11-29 08:05:10.800691503 +0000 UTC m=+0.234254447 container create 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.799 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403510.798518, ef9475c4-846b-4370-8330-5a59e328bc07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.800 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] VM Started (Lifecycle Event)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.807 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.817 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.823 255071 INFO nova.virt.libvirt.driver [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Instance spawned successfully.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.824 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.829 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.834 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.849 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.849 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.850 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.850 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.851 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.851 255071 DEBUG nova.virt.libvirt.driver [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.855 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.856 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403510.7989378, ef9475c4-846b-4370-8330-5a59e328bc07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.856 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] VM Paused (Lifecycle Event)
Nov 29 08:05:10 compute-0 systemd[1]: Started libpod-conmon-474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0.scope.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.890 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.894 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403510.8133984, ef9475c4-846b-4370-8330-5a59e328bc07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.895 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] VM Resumed (Lifecycle Event)
Nov 29 08:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183092a2f242256f77da8ba0099ea1b4a00dfb8afbad552e91e98eb1cfb545d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.908 255071 INFO nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Took 8.26 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.909 255071 DEBUG nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.917 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.921 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:10 compute-0 podman[274625]: 2025-11-29 08:05:10.950310636 +0000 UTC m=+0.383873590 container init 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.954 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:10 compute-0 podman[274625]: 2025-11-29 08:05:10.957573222 +0000 UTC m=+0.391136156 container start 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.982 255071 INFO nova.compute.manager [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Took 9.22 seconds to build instance.
Nov 29 08:05:10 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [NOTICE]   (274670) : New worker (274672) forked
Nov 29 08:05:10 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [NOTICE]   (274670) : Loading success.
Nov 29 08:05:10 compute-0 nova_compute[255040]: 2025-11-29 08:05:10.999 255071 DEBUG oslo_concurrency.lockutils [None req-88519526-9cf1-4955-8b71-ebcbab1ed57a 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/424340558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 59 op/s
Nov 29 08:05:11 compute-0 ceph-mon[75237]: osdmap e224: 3 total, 3 up, 3 in
Nov 29 08:05:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/424340558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.223 255071 DEBUG nova.compute.manager [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.224 255071 DEBUG oslo_concurrency.lockutils [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.224 255071 DEBUG oslo_concurrency.lockutils [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.224 255071 DEBUG oslo_concurrency.lockutils [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.224 255071 DEBUG nova.compute.manager [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] No waiting events found dispatching network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:12 compute-0 nova_compute[255040]: 2025-11-29 08:05:12.224 255071 WARNING nova.compute.manager [req-ad6bdad5-391f-4ef1-9216-224b7593d590 req-6d521ca8-f637-40a7-91d5-f1e16e9deab0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received unexpected event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 for instance with vm_state active and task_state None.
Nov 29 08:05:13 compute-0 ceph-mon[75237]: pgmap v1341: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 59 op/s
Nov 29 08:05:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 98 op/s
Nov 29 08:05:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:13 compute-0 nova_compute[255040]: 2025-11-29 08:05:13.781 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 29 08:05:14 compute-0 ceph-mon[75237]: pgmap v1342: 305 pgs: 305 active+clean; 213 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 98 op/s
Nov 29 08:05:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.300 255071 DEBUG nova.compute.manager [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-changed-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.301 255071 DEBUG nova.compute.manager [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Refreshing instance network info cache due to event network-changed-81864c7c-554a-4b81-ba45-62d08b95c981. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.302 255071 DEBUG oslo_concurrency.lockutils [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.302 255071 DEBUG oslo_concurrency.lockutils [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.302 255071 DEBUG nova.network.neutron [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Refreshing network info cache for port 81864c7c-554a-4b81-ba45-62d08b95c981 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:14 compute-0 nova_compute[255040]: 2025-11-29 08:05:14.607 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:15 compute-0 nova_compute[255040]: 2025-11-29 08:05:15.267 255071 DEBUG nova.network.neutron [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updated VIF entry in instance network info cache for port 81864c7c-554a-4b81-ba45-62d08b95c981. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:15 compute-0 nova_compute[255040]: 2025-11-29 08:05:15.268 255071 DEBUG nova.network.neutron [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updating instance_info_cache with network_info: [{"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:15 compute-0 nova_compute[255040]: 2025-11-29 08:05:15.295 255071 DEBUG oslo_concurrency.lockutils [req-62dc5af9-2f60-440c-a9cb-da0192ebb36f req-9a3e780f-e4f3-462c-a2b7-e6c308193c70 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-ef9475c4-846b-4370-8330-5a59e328bc07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.2 MiB/s wr, 193 op/s
Nov 29 08:05:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 29 08:05:16 compute-0 ceph-mon[75237]: osdmap e225: 3 total, 3 up, 3 in
Nov 29 08:05:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 08:05:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 29 08:05:17 compute-0 ceph-mon[75237]: pgmap v1344: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.2 MiB/s wr, 193 op/s
Nov 29 08:05:17 compute-0 ceph-mon[75237]: osdmap e226: 3 total, 3 up, 3 in
Nov 29 08:05:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 163 op/s
Nov 29 08:05:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 29 08:05:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 08:05:18 compute-0 ceph-mon[75237]: pgmap v1346: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 163 op/s
Nov 29 08:05:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 29 08:05:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:18 compute-0 nova_compute[255040]: 2025-11-29 08:05:18.784 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:18 compute-0 podman[274681]: 2025-11-29 08:05:18.943543268 +0000 UTC m=+0.101738213 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:05:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/846265103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 29 08:05:19 compute-0 ceph-mon[75237]: osdmap e227: 3 total, 3 up, 3 in
Nov 29 08:05:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/846265103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 08:05:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 29 08:05:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 41 KiB/s wr, 205 op/s
Nov 29 08:05:19 compute-0 nova_compute[255040]: 2025-11-29 08:05:19.609 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 29 08:05:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75237]: osdmap e228: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75237]: pgmap v1349: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 41 KiB/s wr, 205 op/s
Nov 29 08:05:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3397418372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3397418372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:21 compute-0 ceph-mon[75237]: osdmap e229: 3 total, 3 up, 3 in
Nov 29 08:05:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3397418372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3397418372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.8 KiB/s wr, 85 op/s
Nov 29 08:05:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 29 08:05:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 08:05:22 compute-0 ceph-mon[75237]: pgmap v1351: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.8 KiB/s wr, 85 op/s
Nov 29 08:05:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Nov 29 08:05:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 29 08:05:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 nova_compute[255040]: 2025-11-29 08:05:23.787 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.888181) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403523888356, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1869, "num_deletes": 517, "total_data_size": 2220763, "memory_usage": 2270616, "flush_reason": "Manual Compaction"}
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403523916700, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1892088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23920, "largest_seqno": 25788, "table_properties": {"data_size": 1884255, "index_size": 4139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21097, "raw_average_key_size": 20, "raw_value_size": 1865964, "raw_average_value_size": 1787, "num_data_blocks": 182, "num_entries": 1044, "num_filter_entries": 1044, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403410, "oldest_key_time": 1764403410, "file_creation_time": 1764403523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 28650 microseconds, and 11249 cpu microseconds.
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:05:23 compute-0 ceph-mon[75237]: osdmap e230: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 ceph-mon[75237]: osdmap e231: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.916843) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1892088 bytes OK
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.916876) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.920251) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.920278) EVENT_LOG_v1 {"time_micros": 1764403523920272, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.920307) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2211363, prev total WAL file size 2239536, number of live WAL files 2.
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.921587) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353037' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1847KB)], [53(9730KB)]
Nov 29 08:05:23 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403523921707, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11856006, "oldest_snapshot_seqno": -1}
Nov 29 08:05:23 compute-0 nova_compute[255040]: 2025-11-29 08:05:23.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5312 keys, 8729278 bytes, temperature: kUnknown
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403524072645, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8729278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8691422, "index_size": 23474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 133576, "raw_average_key_size": 25, "raw_value_size": 8593289, "raw_average_value_size": 1617, "num_data_blocks": 962, "num_entries": 5312, "num_filter_entries": 5312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.073199) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8729278 bytes
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.077001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.4 rd, 57.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.5 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(10.9) write-amplify(4.6) OK, records in: 6332, records dropped: 1020 output_compression: NoCompression
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.077027) EVENT_LOG_v1 {"time_micros": 1764403524077014, "job": 28, "event": "compaction_finished", "compaction_time_micros": 151233, "compaction_time_cpu_micros": 31533, "output_level": 6, "num_output_files": 1, "total_output_size": 8729278, "num_input_records": 6332, "num_output_records": 5312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403524077545, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403524079529, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:23.921449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.079580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.079586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.079588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.079589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:05:24.079591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:05:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741963831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:24 compute-0 nova_compute[255040]: 2025-11-29 08:05:24.612 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 29 08:05:24 compute-0 ceph-mon[75237]: pgmap v1353: 305 pgs: 305 active+clean; 213 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Nov 29 08:05:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2741963831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 08:05:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 29 08:05:25 compute-0 ovn_controller[153295]: 2025-11-29T08:05:25Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:40:f3 10.100.0.8
Nov 29 08:05:25 compute-0 ovn_controller[153295]: 2025-11-29T08:05:25Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:40:f3 10.100.0.8
Nov 29 08:05:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 227 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Nov 29 08:05:25 compute-0 nova_compute[255040]: 2025-11-29 08:05:25.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:25 compute-0 nova_compute[255040]: 2025-11-29 08:05:25.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:05:25 compute-0 nova_compute[255040]: 2025-11-29 08:05:25.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:05:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 29 08:05:26 compute-0 ceph-mon[75237]: osdmap e232: 3 total, 3 up, 3 in
Nov 29 08:05:26 compute-0 ceph-mon[75237]: pgmap v1356: 305 pgs: 305 active+clean; 227 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Nov 29 08:05:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 08:05:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 29 08:05:26 compute-0 nova_compute[255040]: 2025-11-29 08:05:26.259 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:26 compute-0 nova_compute[255040]: 2025-11-29 08:05:26.259 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:26 compute-0 nova_compute[255040]: 2025-11-29 08:05:26.259 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:05:26 compute-0 nova_compute[255040]: 2025-11-29 08:05:26.259 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:27.127 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:27.128 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:27.129 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:27 compute-0 ceph-mon[75237]: osdmap e233: 3 total, 3 up, 3 in
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.497 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 227 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 2.9 MiB/s wr, 155 op/s
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.786 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating instance_info_cache with network_info: [{"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.805 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-cd169ba7-ec52-418c-a12c-4069b40674d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.806 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.806 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.807 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.807 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:27 compute-0 nova_compute[255040]: 2025-11-29 08:05:27.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.026 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.027 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.027 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.028 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.028 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 29 08:05:28 compute-0 ceph-mon[75237]: pgmap v1358: 305 pgs: 305 active+clean; 227 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 2.9 MiB/s wr, 155 op/s
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 08:05:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254022871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.550 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.654 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.654 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.660 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.660 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.660 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 29 08:05:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 08:05:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 29 08:05:28 compute-0 podman[274730]: 2025-11-29 08:05:28.730136246 +0000 UTC m=+0.097049267 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.790 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.858 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.859 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4258MB free_disk=59.90737533569336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.859 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.859 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.932 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance cd169ba7-ec52-418c-a12c-4069b40674d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.933 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance ef9475c4-846b-4370-8330-5a59e328bc07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.933 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.933 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:05:28 compute-0 nova_compute[255040]: 2025-11-29 08:05:28.986 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:29 compute-0 ceph-mon[75237]: osdmap e234: 3 total, 3 up, 3 in
Nov 29 08:05:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4254022871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:29 compute-0 ceph-mon[75237]: osdmap e235: 3 total, 3 up, 3 in
Nov 29 08:05:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587314279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.481 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.490 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.506 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.532 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.533 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 743 KiB/s rd, 4.9 MiB/s wr, 188 op/s
Nov 29 08:05:29 compute-0 nova_compute[255040]: 2025-11-29 08:05:29.614 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-0 nova_compute[255040]: 2025-11-29 08:05:30.226 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 29 08:05:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 08:05:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2587314279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:30 compute-0 ceph-mon[75237]: pgmap v1361: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 743 KiB/s rd, 4.9 MiB/s wr, 188 op/s
Nov 29 08:05:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 29 08:05:30 compute-0 nova_compute[255040]: 2025-11-29 08:05:30.526 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:30 compute-0 nova_compute[255040]: 2025-11-29 08:05:30.527 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:30 compute-0 nova_compute[255040]: 2025-11-29 08:05:30.527 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:30 compute-0 nova_compute[255040]: 2025-11-29 08:05:30.527 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:05:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 146 op/s
Nov 29 08:05:31 compute-0 ceph-mon[75237]: osdmap e236: 3 total, 3 up, 3 in
Nov 29 08:05:32 compute-0 ceph-mon[75237]: pgmap v1363: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 146 op/s
Nov 29 08:05:32 compute-0 nova_compute[255040]: 2025-11-29 08:05:32.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 409 KiB/s rd, 1.9 MiB/s wr, 170 op/s
Nov 29 08:05:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:33 compute-0 nova_compute[255040]: 2025-11-29 08:05:33.796 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:33 compute-0 podman[274771]: 2025-11-29 08:05:33.904751635 +0000 UTC m=+0.073182634 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 29 08:05:33 compute-0 ceph-mon[75237]: pgmap v1364: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 409 KiB/s rd, 1.9 MiB/s wr, 170 op/s
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.486 255071 DEBUG oslo_concurrency.lockutils [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.486 255071 DEBUG oslo_concurrency.lockutils [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.551 255071 INFO nova.compute.manager [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Detaching volume 4cdf7634-b0c7-4205-bd7c-0875c4bd1eba
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.570 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.615 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.720 255071 INFO nova.virt.block_device [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Attempting to driver detach volume 4cdf7634-b0c7-4205-bd7c-0875c4bd1eba from mountpoint /dev/vdb
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.730 255071 DEBUG nova.virt.libvirt.driver [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Attempting to detach device vdb from instance cd169ba7-ec52-418c-a12c-4069b40674d7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.730 255071 DEBUG nova.virt.libvirt.guest [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-4cdf7634-b0c7-4205-bd7c-0875c4bd1eba">
Nov 29 08:05:34 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <serial>4cdf7634-b0c7-4205-bd7c-0875c4bd1eba</serial>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:34 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.738 255071 INFO nova.virt.libvirt.driver [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully detached device vdb from instance cd169ba7-ec52-418c-a12c-4069b40674d7 from the persistent domain config.
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.738 255071 DEBUG nova.virt.libvirt.driver [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cd169ba7-ec52-418c-a12c-4069b40674d7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.739 255071 DEBUG nova.virt.libvirt.guest [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-4cdf7634-b0c7-4205-bd7c-0875c4bd1eba">
Nov 29 08:05:34 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <serial>4cdf7634-b0c7-4205-bd7c-0875c4bd1eba</serial>
Nov 29 08:05:34 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:05:34 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:34 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.790 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403534.789458, cd169ba7-ec52-418c-a12c-4069b40674d7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.792 255071 DEBUG nova.virt.libvirt.driver [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cd169ba7-ec52-418c-a12c-4069b40674d7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:05:34 compute-0 nova_compute[255040]: 2025-11-29 08:05:34.793 255071 INFO nova.virt.libvirt.driver [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully detached device vdb from instance cd169ba7-ec52-418c-a12c-4069b40674d7 from the live domain config.
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.004 255071 DEBUG nova.objects.instance [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'flavor' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.042 255071 DEBUG oslo_concurrency.lockutils [None req-13e083c9-25dc-49f8-b557-05550fe086c6 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 1.6 MiB/s wr, 141 op/s
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.844 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.845 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.846 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.846 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.847 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.849 255071 INFO nova.compute.manager [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Terminating instance
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.851 255071 DEBUG nova.compute.manager [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:05:35 compute-0 kernel: tapbf5f0bd4-69 (unregistering): left promiscuous mode
Nov 29 08:05:35 compute-0 NetworkManager[49116]: <info>  [1764403535.9076] device (tapbf5f0bd4-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.918 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:35 compute-0 ovn_controller[153295]: 2025-11-29T08:05:35Z|00086|binding|INFO|Releasing lport bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 from this chassis (sb_readonly=0)
Nov 29 08:05:35 compute-0 ovn_controller[153295]: 2025-11-29T08:05:35Z|00087|binding|INFO|Setting lport bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 down in Southbound
Nov 29 08:05:35 compute-0 ovn_controller[153295]: 2025-11-29T08:05:35Z|00088|binding|INFO|Removing iface tapbf5f0bd4-69 ovn-installed in OVS
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.921 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:35.927 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:a4:81 10.100.0.8'], port_security=['fa:16:3e:17:a4:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cd169ba7-ec52-418c-a12c-4069b40674d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '122d6c1348a9421688c8c95fa7bfdf33', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4fa5ad84-2e5d-4915-82bd-1bb6b8ec61df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932d84e2-f2b7-4447-ace7-dc91550d516b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:35.929 163500 INFO neutron.agent.ovn.metadata.agent [-] Port bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 in datapath 0b79d41d-8eb2-4d4a-9786-7791592a7e66 unbound from our chassis
Nov 29 08:05:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:35.931 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0b79d41d-8eb2-4d4a-9786-7791592a7e66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:05:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:35.934 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cfccc99a-ad85-42f5-be48-617583611d65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:35.935 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 namespace which is not needed anymore
Nov 29 08:05:35 compute-0 nova_compute[255040]: 2025-11-29 08:05:35.936 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:35 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 29 08:05:35 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 16.909s CPU time.
Nov 29 08:05:35 compute-0 systemd-machined[216271]: Machine qemu-6-instance-00000006 terminated.
Nov 29 08:05:36 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [NOTICE]   (273350) : haproxy version is 2.8.14-c23fe91
Nov 29 08:05:36 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [NOTICE]   (273350) : path to executable is /usr/sbin/haproxy
Nov 29 08:05:36 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [WARNING]  (273350) : Exiting Master process...
Nov 29 08:05:36 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [ALERT]    (273350) : Current worker (273360) exited with code 143 (Terminated)
Nov 29 08:05:36 compute-0 neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66[273301]: [WARNING]  (273350) : All workers exited. Exiting... (0)
Nov 29 08:05:36 compute-0 systemd[1]: libpod-4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462.scope: Deactivated successfully.
Nov 29 08:05:36 compute-0 podman[274817]: 2025-11-29 08:05:36.080981072 +0000 UTC m=+0.050016240 container died 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.088 255071 INFO nova.virt.libvirt.driver [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Instance destroyed successfully.
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.089 255071 DEBUG nova.objects.instance [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lazy-loading 'resources' on Instance uuid cd169ba7-ec52-418c-a12c-4069b40674d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.100 255071 DEBUG nova.virt.libvirt.vif [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-433627761',display_name='tempest-VolumesBackupsTest-instance-433627761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-433627761',id=6,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuz0qIQ6xhoHq5TxQpExhWUsmNZyNDNS9yXH8I5twTL5A4pRxaeiLeVkHbUZyDz8LpYRH2KFWp5exvZLsFp2vL75/EmERN+ObohGkR86ilphfmiaekgcxTAymp8CPWjDw==',key_name='tempest-keypair-359361841',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='122d6c1348a9421688c8c95fa7bfdf33',ramdisk_id='',reservation_id='r-wsd520bl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-433060525',owner_user_name='tempest-VolumesBackupsTest-433060525-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2f0bad5019c043259e8f0cdbb532a167',uuid=cd169ba7-ec52-418c-a12c-4069b40674d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.101 255071 DEBUG nova.network.os_vif_util [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converting VIF {"id": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "address": "fa:16:3e:17:a4:81", "network": {"id": "0b79d41d-8eb2-4d4a-9786-7791592a7e66", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1557652812-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "122d6c1348a9421688c8c95fa7bfdf33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5f0bd4-69", "ovs_interfaceid": "bf5f0bd4-6972-4cd3-9d99-aace0e25efc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.103 255071 DEBUG nova.network.os_vif_util [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.103 255071 DEBUG os_vif [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.110 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.111 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5f0bd4-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.114 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.117 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462-userdata-shm.mount: Deactivated successfully.
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.121 255071 INFO os_vif [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:a4:81,bridge_name='br-int',has_traffic_filtering=True,id=bf5f0bd4-6972-4cd3-9d99-aace0e25efc8,network=Network(0b79d41d-8eb2-4d4a-9786-7791592a7e66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5f0bd4-69')
Nov 29 08:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-957b8c957884b48f0c8e2cf8dcee56e5843e1ce8acd92868b3dbd99e9ecfeb1d-merged.mount: Deactivated successfully.
Nov 29 08:05:36 compute-0 podman[274817]: 2025-11-29 08:05:36.134186336 +0000 UTC m=+0.103221504 container cleanup 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:05:36 compute-0 systemd[1]: libpod-conmon-4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462.scope: Deactivated successfully.
Nov 29 08:05:36 compute-0 podman[274873]: 2025-11-29 08:05:36.204397589 +0000 UTC m=+0.043522424 container remove 4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.209 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6516e633-26e6-4434-bd22-02ff191f2902]: (4, ('Sat Nov 29 08:05:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 (4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462)\n4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462\nSat Nov 29 08:05:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 (4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462)\n4567cbda7746ec161ac6290cdd2ad8abdab7fc6c47d37f1151743dda3be17462\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.212 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[644d3ac6-f22f-4904-ba95-8b99b4b721e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.213 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b79d41d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.215 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:36 compute-0 kernel: tap0b79d41d-80: left promiscuous mode
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.221 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[eda43639-3946-496b-8bcb-3d345d5d8d00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.232 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.239 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[63edb5b4-07e7-4d2f-8016-7055267022e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.240 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[07ae3577-962e-4e52-99fb-38f3551ef66d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.262 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[64e1bcc1-573c-425c-867b-2798d51cc644]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570892, 'reachable_time': 22994, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274891, 'error': None, 'target': 'ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d0b79d41d\x2d8eb2\x2d4d4a\x2d9786\x2d7791592a7e66.mount: Deactivated successfully.
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.267 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0b79d41d-8eb2-4d4a-9786-7791592a7e66 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:05:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:36.268 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[4de2d921-6bc7-4f2c-9e71-b671b632fc2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.443 255071 DEBUG nova.compute.manager [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-unplugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.443 255071 DEBUG oslo_concurrency.lockutils [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.444 255071 DEBUG oslo_concurrency.lockutils [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.444 255071 DEBUG oslo_concurrency.lockutils [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.444 255071 DEBUG nova.compute.manager [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] No waiting events found dispatching network-vif-unplugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.444 255071 DEBUG nova.compute.manager [req-5ae1d2b6-b39c-4f58-8616-02516b55e427 req-49176db7-fbd0-4c17-90a5-59cbecb10319 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-unplugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.518 255071 INFO nova.virt.libvirt.driver [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Deleting instance files /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7_del
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.519 255071 INFO nova.virt.libvirt.driver [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Deletion of /var/lib/nova/instances/cd169ba7-ec52-418c-a12c-4069b40674d7_del complete
Nov 29 08:05:36 compute-0 ceph-mon[75237]: pgmap v1365: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 1.6 MiB/s wr, 141 op/s
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.763 255071 INFO nova.compute.manager [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Took 0.91 seconds to destroy the instance on the hypervisor.
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.764 255071 DEBUG oslo.service.loopingcall [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.764 255071 DEBUG nova.compute.manager [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:05:36 compute-0 nova_compute[255040]: 2025-11-29 08:05:36.765 255071 DEBUG nova.network.neutron [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:05:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 34 KiB/s wr, 71 op/s
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.266 255071 DEBUG nova.network.neutron [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.297 255071 INFO nova.compute.manager [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Took 1.53 seconds to deallocate network for instance.
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.342 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.343 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.427 255071 DEBUG oslo_concurrency.processutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.519 255071 DEBUG nova.compute.manager [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.520 255071 DEBUG oslo_concurrency.lockutils [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.520 255071 DEBUG oslo_concurrency.lockutils [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.520 255071 DEBUG oslo_concurrency.lockutils [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.520 255071 DEBUG nova.compute.manager [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] No waiting events found dispatching network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.521 255071 WARNING nova.compute.manager [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received unexpected event network-vif-plugged-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 for instance with vm_state deleted and task_state None.
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.521 255071 DEBUG nova.compute.manager [req-9580703b-3777-49ab-b8d1-b99943ce3f8a req-a38c2ed6-41c5-4654-b1ae-ead896de15ec cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Received event network-vif-deleted-bf5f0bd4-6972-4cd3-9d99-aace0e25efc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:38 compute-0 ceph-mon[75237]: pgmap v1366: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 34 KiB/s wr, 71 op/s
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 29 08:05:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 08:05:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:05:38
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Nov 29 08:05:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.896 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305037415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.932 255071 DEBUG oslo_concurrency.processutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.939 255071 DEBUG nova.compute.provider_tree [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.961 255071 DEBUG nova.scheduler.client.report [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:38 compute-0 nova_compute[255040]: 2025-11-29 08:05:38.989 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:39 compute-0 nova_compute[255040]: 2025-11-29 08:05:39.015 255071 INFO nova.scheduler.client.report [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Deleted allocations for instance cd169ba7-ec52-418c-a12c-4069b40674d7
Nov 29 08:05:39 compute-0 nova_compute[255040]: 2025-11-29 08:05:39.079 255071 DEBUG oslo_concurrency.lockutils [None req-a51b7c02-0362-4535-a0fc-f5b367087caf 2f0bad5019c043259e8f0cdbb532a167 122d6c1348a9421688c8c95fa7bfdf33 - - default default] Lock "cd169ba7-ec52-418c-a12c-4069b40674d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 199 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 60 op/s
Nov 29 08:05:39 compute-0 nova_compute[255040]: 2025-11-29 08:05:39.617 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:39 compute-0 ceph-mon[75237]: osdmap e237: 3 total, 3 up, 3 in
Nov 29 08:05:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/305037415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:40 compute-0 ceph-mon[75237]: pgmap v1368: 305 pgs: 305 active+clean; 199 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 60 op/s
Nov 29 08:05:41 compute-0 nova_compute[255040]: 2025-11-29 08:05:41.115 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 58 op/s
Nov 29 08:05:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 29 08:05:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 08:05:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 29 08:05:41 compute-0 sudo[274915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:41 compute-0 sudo[274915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:41 compute-0 sudo[274915]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:41 compute-0 sudo[274940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:41 compute-0 sudo[274940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:41 compute-0 sudo[274940]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:41 compute-0 sudo[274965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:41 compute-0 sudo[274965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:41 compute-0 sudo[274965]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:42 compute-0 sudo[274990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:05:42 compute-0 sudo[274990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:42 compute-0 sudo[274990]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:05:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:05:42 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:05:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:05:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:05:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:05:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:05:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:05:42 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:05:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c653b2e3-7447-41e3-8d0d-cd3ba7a3544b does not exist
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev cd658b7d-3a6c-4267-9483-bedfcbc2005d does not exist
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 373ead85-3671-46e2-bef2-d795e2f648ea does not exist
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:05:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 08:05:43 compute-0 ceph-mon[75237]: pgmap v1369: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 17 KiB/s wr, 58 op/s
Nov 29 08:05:43 compute-0 ceph-mon[75237]: osdmap e238: 3 total, 3 up, 3 in
Nov 29 08:05:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:43 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:05:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:05:43 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:05:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:43 compute-0 sudo[275046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:43 compute-0 sudo[275046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:43 compute-0 sudo[275046]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:43 compute-0 sudo[275071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:43 compute-0 sudo[275071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:43 compute-0 sudo[275071]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:43 compute-0 sudo[275096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:43 compute-0 sudo[275096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:43 compute-0 sudo[275096]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 58 op/s
Nov 29 08:05:43 compute-0 sudo[275121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:05:43 compute-0 sudo[275121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.04693802 +0000 UTC m=+0.047230963 container create e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:05:44 compute-0 systemd[1]: Started libpod-conmon-e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175.scope.
Nov 29 08:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.024836796 +0000 UTC m=+0.025129759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.128845399 +0000 UTC m=+0.129138362 container init e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.137706068 +0000 UTC m=+0.137999011 container start e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.141635474 +0000 UTC m=+0.141928447 container attach e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:44 compute-0 vibrant_lalande[275202]: 167 167
Nov 29 08:05:44 compute-0 systemd[1]: libpod-e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175.scope: Deactivated successfully.
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.14555453 +0000 UTC m=+0.145847473 container died e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-815bc5668d1e68767c0927cc63c11ba4660f9c8b0f9d04a1261a46f12482ce28-merged.mount: Deactivated successfully.
Nov 29 08:05:44 compute-0 podman[275186]: 2025-11-29 08:05:44.189713839 +0000 UTC m=+0.190006812 container remove e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 08:05:44 compute-0 systemd[1]: libpod-conmon-e6d89130a3501f5f8890ba620f95067baf87a2134d65abd7c6d64ed5d160c175.scope: Deactivated successfully.
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.206 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.207 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.226 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.316 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.317 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.326 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.326 255071 INFO nova.compute.claims [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:05:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 29 08:05:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 08:05:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:05:44 compute-0 ceph-mon[75237]: osdmap e239: 3 total, 3 up, 3 in
Nov 29 08:05:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:05:44 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:44 compute-0 ceph-mon[75237]: pgmap v1372: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 58 op/s
Nov 29 08:05:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 29 08:05:44 compute-0 podman[275226]: 2025-11-29 08:05:44.39263827 +0000 UTC m=+0.060258225 container create 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:05:44 compute-0 systemd[1]: Started libpod-conmon-018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace.scope.
Nov 29 08:05:44 compute-0 podman[275226]: 2025-11-29 08:05:44.374644595 +0000 UTC m=+0.042264550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.478 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:44 compute-0 podman[275226]: 2025-11-29 08:05:44.490742885 +0000 UTC m=+0.158362880 container init 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:05:44 compute-0 podman[275226]: 2025-11-29 08:05:44.498367691 +0000 UTC m=+0.165987626 container start 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 08:05:44 compute-0 podman[275226]: 2025-11-29 08:05:44.502511733 +0000 UTC m=+0.170131748 container attach 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:05:44 compute-0 nova_compute[255040]: 2025-11-29 08:05:44.620 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271889349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271889349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313824532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.015 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.024 255071 DEBUG nova.compute.provider_tree [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.047 255071 DEBUG nova.scheduler.client.report [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.071 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.072 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.285 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.286 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.310 255071 INFO nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.337 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:45 compute-0 ceph-mon[75237]: osdmap e240: 3 total, 3 up, 3 in
Nov 29 08:05:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/271889349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/271889349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2313824532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.425 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.426 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.427 255071 INFO nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Creating image(s)
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.451 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.478 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.506 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.511 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.539 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.540 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.561 255071 DEBUG nova.objects.instance [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'flavor' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.577 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.578 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.578 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.579 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:45 compute-0 boring_bhaskara[275242]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:05:45 compute-0 boring_bhaskara[275242]: --> relative data size: 1.0
Nov 29 08:05:45 compute-0 boring_bhaskara[275242]: --> All data devices are unavailable
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.606 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.610 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 e13306d3-0b4c-4937-8b4b-83605575ce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 44 op/s
Nov 29 08:05:45 compute-0 systemd[1]: libpod-018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace.scope: Deactivated successfully.
Nov 29 08:05:45 compute-0 podman[275226]: 2025-11-29 08:05:45.635449834 +0000 UTC m=+1.303069769 container died 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:05:45 compute-0 systemd[1]: libpod-018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace.scope: Consumed 1.046s CPU time.
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.643 255071 INFO nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Ignoring supplied device name: /dev/vdb
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.662 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9d393f82a18843c25789d87d26c0b2d78d0a49e3c01081071699214b647ad23-merged.mount: Deactivated successfully.
Nov 29 08:05:45 compute-0 podman[275226]: 2025-11-29 08:05:45.698177045 +0000 UTC m=+1.365796980 container remove 018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:05:45 compute-0 systemd[1]: libpod-conmon-018f2bc67d3cf28aee5f003bc245ed8e8d1b5fb8865551961516eeb4ae337ace.scope: Deactivated successfully.
Nov 29 08:05:45 compute-0 sudo[275121]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:45 compute-0 sudo[275399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:45 compute-0 sudo[275399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:45 compute-0 sudo[275399]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.855 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.857 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.857 255071 INFO nova.compute.manager [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Attaching volume 8fe937d6-ef88-4f69-9abb-837b0fa81235 to /dev/vdb
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.890 255071 DEBUG nova.policy [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c4f53a86d1eb4bdebed4ec5dd9b5ff45', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e34fda55585f453b8b66f12e625234fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:45 compute-0 sudo[275426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:45 compute-0 sudo[275426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:45 compute-0 sudo[275426]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:45 compute-0 nova_compute[255040]: 2025-11-29 08:05:45.978 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 e13306d3-0b4c-4937-8b4b-83605575ce82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:45 compute-0 sudo[275451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:45 compute-0 sudo[275451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:46 compute-0 sudo[275451]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.011 255071 DEBUG os_brick.utils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.013 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.028 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.029 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[d578ded9-aa6d-45f6-803e-6f98991b611a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.059 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.066 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] resizing rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:05:46 compute-0 sudo[275494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:05:46 compute-0 sudo[275494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.070 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.070 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5d285ee0-cd80-481b-935d-4cf96d9fc221]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.104 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.119 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.120 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.120 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[74a71b99-0137-452d-a88c-a9dd32998163]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.122 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[37757bde-2359-4430-8996-dac899641b0c]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.122 255071 DEBUG oslo_concurrency.processutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.186 255071 DEBUG oslo_concurrency.processutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "nvme version" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.193 255071 DEBUG nova.objects.instance [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'migration_context' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.195 255071 DEBUG os_brick.initiator.connectors.lightos [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.196 255071 DEBUG os_brick.initiator.connectors.lightos [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.196 255071 DEBUG os_brick.initiator.connectors.lightos [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.196 255071 DEBUG os_brick.utils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] <== get_connector_properties: return (184ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.196 255071 DEBUG nova.virt.block_device [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updating existing volume attachment record: 7d20aa43-fb12-4459-b625-eb55273ddfc6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.215 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.215 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Ensure instance console log exists: /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.216 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.216 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:46 compute-0 nova_compute[255040]: 2025-11-29 08:05:46.216 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:46 compute-0 ceph-mon[75237]: pgmap v1374: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 44 op/s
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.472329445 +0000 UTC m=+0.049861395 container create 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:05:46 compute-0 systemd[1]: Started libpod-conmon-17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4.scope.
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.449912291 +0000 UTC m=+0.027444261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.571084287 +0000 UTC m=+0.148616247 container init 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.579396501 +0000 UTC m=+0.156928451 container start 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.583545993 +0000 UTC m=+0.161077963 container attach 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:05:46 compute-0 elegant_archimedes[275639]: 167 167
Nov 29 08:05:46 compute-0 systemd[1]: libpod-17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4.scope: Deactivated successfully.
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.586521263 +0000 UTC m=+0.164053213 container died 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 08:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6cefd8e50a3a72349b11dadd459ddb2d5431f349af490f195c77d3c6be9da28-merged.mount: Deactivated successfully.
Nov 29 08:05:46 compute-0 podman[275623]: 2025-11-29 08:05:46.630291313 +0000 UTC m=+0.207823293 container remove 17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:46 compute-0 systemd[1]: libpod-conmon-17b3359ba4a165e6412c7a0f49b532960c7276b85533e03a9650bf4f74be54b4.scope: Deactivated successfully.
Nov 29 08:05:46 compute-0 podman[275663]: 2025-11-29 08:05:46.813131212 +0000 UTC m=+0.043697409 container create d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:05:46 compute-0 systemd[1]: Started libpod-conmon-d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646.scope.
Nov 29 08:05:46 compute-0 podman[275663]: 2025-11-29 08:05:46.793277977 +0000 UTC m=+0.023844194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b5511631b8e19e59b4b9239958d9a59ffdd5b4525c8078d75e1fa9e968ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b5511631b8e19e59b4b9239958d9a59ffdd5b4525c8078d75e1fa9e968ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b5511631b8e19e59b4b9239958d9a59ffdd5b4525c8078d75e1fa9e968ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0b5511631b8e19e59b4b9239958d9a59ffdd5b4525c8078d75e1fa9e968ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:46 compute-0 podman[275663]: 2025-11-29 08:05:46.913032335 +0000 UTC m=+0.143598552 container init d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:05:46 compute-0 podman[275663]: 2025-11-29 08:05:46.921289128 +0000 UTC m=+0.151855325 container start d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:05:46 compute-0 podman[275663]: 2025-11-29 08:05:46.925347327 +0000 UTC m=+0.155913524 container attach d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 08:05:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1954588159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.088 255071 DEBUG nova.objects.instance [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'flavor' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.121 255071 DEBUG nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Attempting to attach volume 8fe937d6-ef88-4f69-9abb-837b0fa81235 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.125 255071 DEBUG nova.virt.libvirt.guest [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:05:47 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8fe937d6-ef88-4f69-9abb-837b0fa81235">
Nov 29 08:05:47 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:05:47 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:47 compute-0 nova_compute[255040]:   <serial>8fe937d6-ef88-4f69-9abb-837b0fa81235</serial>
Nov 29 08:05:47 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:47 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.152 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Successfully created port: f819ff69-f947-468c-9e7a-6ba9cca9c85f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.242 255071 DEBUG nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.243 255071 DEBUG nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.243 255071 DEBUG nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.243 255071 DEBUG nova.virt.libvirt.driver [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] No VIF found with MAC fa:16:3e:6f:40:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1954588159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:47 compute-0 nova_compute[255040]: 2025-11-29 08:05:47.424 255071 DEBUG oslo_concurrency.lockutils [None req-7f27a783-07c8-44f5-8c8c-2eb118e1acff 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 41 op/s
Nov 29 08:05:47 compute-0 sharp_fermi[275680]: {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     "0": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "devices": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "/dev/loop3"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             ],
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_name": "ceph_lv0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_size": "21470642176",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "name": "ceph_lv0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "tags": {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.crush_device_class": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.encrypted": "0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_id": "0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.vdo": "0"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             },
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "vg_name": "ceph_vg0"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         }
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     ],
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     "1": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "devices": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "/dev/loop4"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             ],
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_name": "ceph_lv1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_size": "21470642176",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "name": "ceph_lv1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "tags": {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.crush_device_class": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.encrypted": "0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_id": "1",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.vdo": "0"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             },
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "vg_name": "ceph_vg1"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         }
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     ],
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     "2": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "devices": [
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "/dev/loop5"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             ],
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_name": "ceph_lv2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_size": "21470642176",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "name": "ceph_lv2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "tags": {
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.crush_device_class": "",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.encrypted": "0",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osd_id": "2",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:                 "ceph.vdo": "0"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             },
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "type": "block",
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:             "vg_name": "ceph_vg2"
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:         }
Nov 29 08:05:47 compute-0 sharp_fermi[275680]:     ]
Nov 29 08:05:47 compute-0 sharp_fermi[275680]: }
Nov 29 08:05:47 compute-0 systemd[1]: libpod-d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646.scope: Deactivated successfully.
Nov 29 08:05:47 compute-0 podman[275663]: 2025-11-29 08:05:47.794766125 +0000 UTC m=+1.025332342 container died d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0b5511631b8e19e59b4b9239958d9a59ffdd5b4525c8078d75e1fa9e968ce7-merged.mount: Deactivated successfully.
Nov 29 08:05:47 compute-0 podman[275663]: 2025-11-29 08:05:47.866715055 +0000 UTC m=+1.097281252 container remove d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:05:47 compute-0 systemd[1]: libpod-conmon-d97665b580f7eddc0bf1ce305784ad456c40c8f724ec137e9483d025e5b1f646.scope: Deactivated successfully.
Nov 29 08:05:47 compute-0 sudo[275494]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:47 compute-0 sudo[275719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:48 compute-0 sudo[275719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:48 compute-0 sudo[275719]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:48 compute-0 sudo[275744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:48 compute-0 sudo[275744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:48 compute-0 sudo[275744]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:48 compute-0 sudo[275769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:48 compute-0 sudo[275769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:48 compute-0 sudo[275769]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:48 compute-0 sudo[275794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:05:48 compute-0 sudo[275794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:48 compute-0 ceph-mon[75237]: pgmap v1375: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 167 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 41 op/s
Nov 29 08:05:48 compute-0 podman[275858]: 2025-11-29 08:05:48.611815702 +0000 UTC m=+0.051792608 container create f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:05:48 compute-0 systemd[1]: Started libpod-conmon-f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387.scope.
Nov 29 08:05:48 compute-0 podman[275858]: 2025-11-29 08:05:48.591352039 +0000 UTC m=+0.031328985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:48 compute-0 podman[275858]: 2025-11-29 08:05:48.707756627 +0000 UTC m=+0.147733563 container init f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:05:48 compute-0 podman[275858]: 2025-11-29 08:05:48.717200312 +0000 UTC m=+0.157177258 container start f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:05:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 29 08:05:48 compute-0 podman[275858]: 2025-11-29 08:05:48.723913213 +0000 UTC m=+0.163890139 container attach f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:05:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 08:05:48 compute-0 stoic_jepsen[275874]: 167 167
Nov 29 08:05:48 compute-0 systemd[1]: libpod-f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387.scope: Deactivated successfully.
Nov 29 08:05:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 29 08:05:48 compute-0 podman[275879]: 2025-11-29 08:05:48.784146657 +0000 UTC m=+0.035661693 container died f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b23a30b2423a788434199c67c1b9c16c28e7da4ddeb7fdc74311726002a7e8db-merged.mount: Deactivated successfully.
Nov 29 08:05:48 compute-0 podman[275879]: 2025-11-29 08:05:48.821231296 +0000 UTC m=+0.072746302 container remove f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 08:05:48 compute-0 systemd[1]: libpod-conmon-f9c22c42ae7ac0eb24ecd1da420afd847d90de61944a80c61237693cf5c7e387.scope: Deactivated successfully.
Nov 29 08:05:48 compute-0 podman[275902]: 2025-11-29 08:05:48.984907069 +0000 UTC m=+0.039100565 container create 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 08:05:49 compute-0 systemd[1]: Started libpod-conmon-0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3.scope.
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.016 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Successfully updated port: f819ff69-f947-468c-9e7a-6ba9cca9c85f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.035 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.035 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquired lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.036 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d5da61a765e49620170913893fc717fa348beccf6f6520d26d997ad5323160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d5da61a765e49620170913893fc717fa348beccf6f6520d26d997ad5323160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d5da61a765e49620170913893fc717fa348beccf6f6520d26d997ad5323160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d5da61a765e49620170913893fc717fa348beccf6f6520d26d997ad5323160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:49 compute-0 podman[275902]: 2025-11-29 08:05:48.967556522 +0000 UTC m=+0.021750028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:49 compute-0 podman[275902]: 2025-11-29 08:05:49.072055558 +0000 UTC m=+0.126249054 container init 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:49 compute-0 podman[275902]: 2025-11-29 08:05:49.082005166 +0000 UTC m=+0.136198662 container start 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:49 compute-0 podman[275902]: 2025-11-29 08:05:49.086222141 +0000 UTC m=+0.140415667 container attach 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.108 255071 DEBUG nova.compute.manager [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.109 255071 DEBUG nova.compute.manager [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing instance network info cache due to event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.109 255071 DEBUG oslo_concurrency.lockutils [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.148 255071 DEBUG nova.compute.manager [req-75584ff3-28ad-47e5-8d4a-3dc7c540f3b3 req-a361959e-661f-4cda-9369-498478a62195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event volume-extended-8fe937d6-ef88-4f69-9abb-837b0fa81235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:49 compute-0 podman[275916]: 2025-11-29 08:05:49.167971244 +0000 UTC m=+0.136986454 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.168 255071 DEBUG nova.compute.manager [req-75584ff3-28ad-47e5-8d4a-3dc7c540f3b3 req-a361959e-661f-4cda-9369-498478a62195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Handling volume-extended event for volume 8fe937d6-ef88-4f69-9abb-837b0fa81235 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.176 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.193 255071 INFO nova.compute.manager [req-75584ff3-28ad-47e5-8d4a-3dc7c540f3b3 req-a361959e-661f-4cda-9369-498478a62195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Cinder extended volume 8fe937d6-ef88-4f69-9abb-837b0fa81235; extending it to detect new size
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.360 255071 DEBUG nova.virt.libvirt.driver [req-75584ff3-28ad-47e5-8d4a-3dc7c540f3b3 req-a361959e-661f-4cda-9369-498478a62195 25ec8781b6804b3590f81f8e2d32f01e d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Nov 29 08:05:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 206 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.2 MiB/s wr, 105 op/s
Nov 29 08:05:49 compute-0 nova_compute[255040]: 2025-11-29 08:05:49.622 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:49 compute-0 ceph-mon[75237]: osdmap e241: 3 total, 3 up, 3 in
Nov 29 08:05:50 compute-0 jovial_benz[275919]: {
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_id": 2,
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "type": "bluestore"
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     },
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_id": 0,
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "type": "bluestore"
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     },
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_id": 1,
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:05:50 compute-0 jovial_benz[275919]:         "type": "bluestore"
Nov 29 08:05:50 compute-0 jovial_benz[275919]:     }
Nov 29 08:05:50 compute-0 jovial_benz[275919]: }
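The jovial_benz output above is the complete payload of the `ceph-volume ... -- raw list --format json` call issued via sudo at 08:05:48: three BlueStore OSDs keyed by osd_uuid. A short, purely illustrative sketch of turning it into an osd_id-to-device map and checking that every entry belongs to the expected cluster fsid; the input filename is hypothetical:

    import json

    CLUSTER_FSID = "321e9cb7-01a2-5759-bf8c-981c9a64aa3e"

    # Hypothetical file holding the jovial_benz JSON shown above.
    with open("raw_list.json") as fh:
        osds = json.load(fh)

    devices = {}
    for osd_uuid, info in osds.items():
        # Every record in the log carries the same ceph_fsid; flag strays.
        assert info["ceph_fsid"] == CLUSTER_FSID, f"foreign OSD {osd_uuid}"
        devices[info["osd_id"]] = info["device"]

    print(devices)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 0: '...', 1: '...'}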
Nov 29 08:05:50 compute-0 systemd[1]: libpod-0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3.scope: Deactivated successfully.
Nov 29 08:05:50 compute-0 podman[275902]: 2025-11-29 08:05:50.066071895 +0000 UTC m=+1.120265401 container died 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-16d5da61a765e49620170913893fc717fa348beccf6f6520d26d997ad5323160-merged.mount: Deactivated successfully.
Nov 29 08:05:50 compute-0 podman[275902]: 2025-11-29 08:05:50.123269527 +0000 UTC m=+1.177463023 container remove 0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:05:50 compute-0 systemd[1]: libpod-conmon-0e3665590a35508ed9ec22c1dd4cca0aee82bc238f846ca0915ee45f0e3a24b3.scope: Deactivated successfully.
Nov 29 08:05:50 compute-0 sudo[275794]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:05:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:05:50 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:50 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d88e7bd4-2624-42e6-8f78-f0a7e7eb405d does not exist
Nov 29 08:05:50 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f54f049b-7db2-43e3-a457-904017f9cb4e does not exist
Nov 29 08:05:50 compute-0 sudo[275989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:50 compute-0 sudo[275989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:50 compute-0 sudo[275989]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:50 compute-0 sudo[276014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:05:50 compute-0 sudo[276014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:50 compute-0 sudo[276014]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:50 compute-0 ovn_controller[153295]: 2025-11-29T08:05:50Z|00089|binding|INFO|Releasing lport 9b27068e-ab44-47df-bfa7-ae1ee2b760c5 from this chassis (sb_readonly=0)
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.364 255071 DEBUG oslo_concurrency.lockutils [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.364 255071 DEBUG oslo_concurrency.lockutils [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.380 255071 INFO nova.compute.manager [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Detaching volume 8fe937d6-ef88-4f69-9abb-837b0fa81235
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.382 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.502 255071 INFO nova.virt.block_device [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Attempting to driver detach volume 8fe937d6-ef88-4f69-9abb-837b0fa81235 from mountpoint /dev/vdb
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.514 255071 DEBUG nova.virt.libvirt.driver [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Attempting to detach device vdb from instance ef9475c4-846b-4370-8330-5a59e328bc07 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.515 255071 DEBUG nova.virt.libvirt.guest [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8fe937d6-ef88-4f69-9abb-837b0fa81235">
Nov 29 08:05:50 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <serial>8fe937d6-ef88-4f69-9abb-837b0fa81235</serial>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:50 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.524 255071 INFO nova.virt.libvirt.driver [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Successfully detached device vdb from instance ef9475c4-846b-4370-8330-5a59e328bc07 from the persistent domain config.
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.524 255071 DEBUG nova.virt.libvirt.driver [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ef9475c4-846b-4370-8330-5a59e328bc07 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.525 255071 DEBUG nova.virt.libvirt.guest [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8fe937d6-ef88-4f69-9abb-837b0fa81235">
Nov 29 08:05:50 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   </source>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <serial>8fe937d6-ef88-4f69-9abb-837b0fa81235</serial>
Nov 29 08:05:50 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:05:50 compute-0 nova_compute[255040]: </disk>
Nov 29 08:05:50 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.605 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403550.604447, ef9475c4-846b-4370-8330-5a59e328bc07 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.607 255071 DEBUG nova.virt.libvirt.driver [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ef9475c4-846b-4370-8330-5a59e328bc07 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.609 255071 INFO nova.virt.libvirt.driver [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Successfully detached device vdb from instance ef9475c4-846b-4370-8330-5a59e328bc07 from the live domain config.
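Nova's detach sequence above runs in two phases, persistent config first and then the live domain, and then blocks until libvirt raises the DeviceRemoved event for virtio-disk1. The same effect can be reproduced in miniature with the libvirt-python bindings, here as one combined call rather than nova's two separate phases; the XML is abridged from the log, since libvirt identifies the disk to detach by its target dev:

    import libvirt

    # Abridged from the detach XML logged above.
    DETACH_XML = """<disk type="network" device="disk">
      <target dev="vdb" bus="virtio"/>
      <serial>8fe937d6-ef88-4f69-9abb-837b0fa81235</serial>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("ef9475c4-846b-4370-8330-5a59e328bc07")

    # Detach from both the persistent definition and the running guest; the
    # removal itself is asynchronous and is signalled by the
    # VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED event the driver waits on above.
    flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG | libvirt.VIR_DOMAIN_AFFECT_LIVE
    dom.detachDeviceFlags(DETACH_XML, flags)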
Nov 29 08:05:50 compute-0 ceph-mon[75237]: pgmap v1377: 305 pgs: 305 active+clean; 206 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.2 MiB/s wr, 105 op/s
Nov 29 08:05:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:50 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.796 255071 DEBUG nova.objects.instance [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'flavor' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.826 255071 DEBUG oslo_concurrency.lockutils [None req-b8c66012-84e1-40d8-b872-156bb219e9a2 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.895 255071 DEBUG nova.network.neutron [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.912 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Releasing lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.912 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Instance network_info: |[{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.913 255071 DEBUG oslo_concurrency.lockutils [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.913 255071 DEBUG nova.network.neutron [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.918 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Start _get_guest_xml network_info=[{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.926 255071 WARNING nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.936 255071 DEBUG nova.virt.libvirt.host [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.937 255071 DEBUG nova.virt.libvirt.host [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.949 255071 DEBUG nova.virt.libvirt.host [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.950 255071 DEBUG nova.virt.libvirt.host [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.951 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.951 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.952 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.953 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.953 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.954 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.954 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.955 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.955 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.956 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.956 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.957 255071 DEBUG nova.virt.hardware [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
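The topology search logged here is fully determined by its inputs: with no flavor or image constraints (limits and preferences all 0:0:0, caps of 65536), nova enumerates (sockets, cores, threads) triples covering the vCPU count, so the 1-vCPU m1.nano flavor collapses to the single topology 1:1:1. A toy re-derivation of that result, not nova.virt.hardware's actual code:

    # Toy enumeration of guest CPU topologies for a given vCPU count,
    # mirroring the "Build topologies ... Got 1 possible topologies" lines.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]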
Nov 29 08:05:50 compute-0 nova_compute[255040]: 2025-11-29 08:05:50.965 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.092 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403536.0850415, cd169ba7-ec52-418c-a12c-4069b40674d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.093 255071 INFO nova.compute.manager [-] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] VM Stopped (Lifecycle Event)
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.116 255071 DEBUG nova.compute.manager [None req-f80bbf04-1480-4806-abd5-9435d9dd9d39 - - - - - -] [instance: cd169ba7-ec52-418c-a12c-4069b40674d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.121 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080531621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.477 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.509 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.514 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
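Both rbd_utils connections above begin by shelling out to the same monitor-discovery command. It can be run by hand exactly as logged; a sketch using subprocess, with the JSON field names ("mons", "name", "public_addr") as emitted by recent Ceph releases (adjust if yours differ):

    import json
    import subprocess

    # The exact command nova logs above; --id/--conf select the openstack client.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout

    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        print(mon["name"], mon.get("public_addr"))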
Nov 29 08:05:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 213 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.648 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.649 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.650 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.650 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.650 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.651 255071 INFO nova.compute.manager [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Terminating instance
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.652 255071 DEBUG nova.compute.manager [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:05:51 compute-0 kernel: tap81864c7c-55 (unregistering): left promiscuous mode
Nov 29 08:05:51 compute-0 NetworkManager[49116]: <info>  [1764403551.7241] device (tap81864c7c-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.728 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 ovn_controller[153295]: 2025-11-29T08:05:51Z|00090|binding|INFO|Releasing lport 81864c7c-554a-4b81-ba45-62d08b95c981 from this chassis (sb_readonly=0)
Nov 29 08:05:51 compute-0 ovn_controller[153295]: 2025-11-29T08:05:51Z|00091|binding|INFO|Setting lport 81864c7c-554a-4b81-ba45-62d08b95c981 down in Southbound
Nov 29 08:05:51 compute-0 ovn_controller[153295]: 2025-11-29T08:05:51Z|00092|binding|INFO|Removing iface tap81864c7c-55 ovn-installed in OVS
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.732 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:51.737 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:40:f3 10.100.0.8'], port_security=['fa:16:3e:6f:40:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ef9475c4-846b-4370-8330-5a59e328bc07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65fc2f72f64e4c91b66d05d7ebaf9e4c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c32f199-4e61-41f8-97e9-097ce92f8499', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=750ebe7d-6ab1-4015-999e-8f79293608fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=81864c7c-554a-4b81-ba45-62d08b95c981) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:51.739 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 81864c7c-554a-4b81-ba45-62d08b95c981 in datapath 905664a2-5abc-4150-a1c0-f1b86c7f655e unbound from our chassis
Nov 29 08:05:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:51.740 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 905664a2-5abc-4150-a1c0-f1b86c7f655e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:05:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:51.742 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ad7be6-83bf-41a8-95e5-22b9bb3ab99a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:51.743 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e namespace which is not needed anymore
Nov 29 08:05:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4080531621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.761 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 29 08:05:51 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 16.192s CPU time.
Nov 29 08:05:51 compute-0 systemd-machined[216271]: Machine qemu-7-instance-00000007 terminated.
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [NOTICE]   (274670) : haproxy version is 2.8.14-c23fe91
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [NOTICE]   (274670) : path to executable is /usr/sbin/haproxy
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [WARNING]  (274670) : Exiting Master process...
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [WARNING]  (274670) : Exiting Master process...
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [ALERT]    (274670) : Current worker (274672) exited with code 143 (Terminated)
Nov 29 08:05:51 compute-0 neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e[274666]: [WARNING]  (274670) : All workers exited. Exiting... (0)
Nov 29 08:05:51 compute-0 systemd[1]: libpod-474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0.scope: Deactivated successfully.
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.895 255071 INFO nova.virt.libvirt.driver [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Instance destroyed successfully.
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.895 255071 DEBUG nova.objects.instance [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lazy-loading 'resources' on Instance uuid ef9475c4-846b-4370-8330-5a59e328bc07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:51 compute-0 podman[276125]: 2025-11-29 08:05:51.900751935 +0000 UTC m=+0.051868459 container died 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.910 255071 DEBUG nova.virt.libvirt.vif [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1131396906',display_name='tempest-VolumesExtendAttachedTest-instance-1131396906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1131396906',id=7,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtAWAu/tHfQ+gvLOin9nwfuA44WRb8BkhOR7BuFicQSRE2YvkDNdJHZ+xBcp7Nt990mVKqwKK+dkhENtNo30xapMYTR4HULTUocDd4F1NCeMGLP4UrL5jQbb2RKAX9+XA==',key_name='tempest-keypair-1417582676',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65fc2f72f64e4c91b66d05d7ebaf9e4c',ramdisk_id='',reservation_id='r-nl1nrz56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1821035368',owner_user_name='tempest-VolumesExtendAttachedTest-1821035368-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7d13a2468b4442809f7968c612cb7523',uuid=ef9475c4-846b-4370-8330-5a59e328bc07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.911 255071 DEBUG nova.network.os_vif_util [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converting VIF {"id": "81864c7c-554a-4b81-ba45-62d08b95c981", "address": "fa:16:3e:6f:40:f3", "network": {"id": "905664a2-5abc-4150-a1c0-f1b86c7f655e", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1867814284-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65fc2f72f64e4c91b66d05d7ebaf9e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81864c7c-55", "ovs_interfaceid": "81864c7c-554a-4b81-ba45-62d08b95c981", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.911 255071 DEBUG nova.network.os_vif_util [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.912 255071 DEBUG os_vif [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.915 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.915 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81864c7c-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.918 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.920 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:51 compute-0 nova_compute[255040]: 2025-11-29 08:05:51.922 255071 INFO os_vif [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:40:f3,bridge_name='br-int',has_traffic_filtering=True,id=81864c7c-554a-4b81-ba45-62d08b95c981,network=Network(905664a2-5abc-4150-a1c0-f1b86c7f655e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81864c7c-55')
Nov 29 08:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0-userdata-shm.mount: Deactivated successfully.
Nov 29 08:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-183092a2f242256f77da8ba0099ea1b4a00dfb8afbad552e91e98eb1cfb545d1-merged.mount: Deactivated successfully.
Nov 29 08:05:51 compute-0 podman[276125]: 2025-11-29 08:05:51.954623877 +0000 UTC m=+0.105740401 container cleanup 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:05:51 compute-0 systemd[1]: libpod-conmon-474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0.scope: Deactivated successfully.
Nov 29 08:05:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505473751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.022 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.024 255071 DEBUG nova.virt.libvirt.vif [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1728969715',display_name='tempest-TestStampPattern-server-1728969715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1728969715',id=8,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-cm16igo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:45Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=e13306d3-0b4c-4937-8b4b-83605575ce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.024 255071 DEBUG nova.network.os_vif_util [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.024 255071 DEBUG nova.network.os_vif_util [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.026 255071 DEBUG nova.objects.instance [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'pci_devices' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:52 compute-0 podman[276184]: 2025-11-29 08:05:52.029842365 +0000 UTC m=+0.052946829 container remove 474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.035 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[db683fcb-7d47-46d2-afe9-2c21ce9ef04f]: (4, ('Sat Nov 29 08:05:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e (474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0)\n474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0\nSat Nov 29 08:05:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e (474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0)\n474ed2e01e7e95d8d6d655cbc2fdc840ba8097890b7fb3a92099497c50571dd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.037 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd7d22f-c50c-48b3-bdc8-552dc278a343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.038 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905664a2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.039 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:52 compute-0 kernel: tap905664a2-50: left promiscuous mode
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.051 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <uuid>e13306d3-0b4c-4937-8b4b-83605575ce82</uuid>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <name>instance-00000008</name>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:name>tempest-TestStampPattern-server-1728969715</nova:name>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:05:50</nova:creationTime>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:user uuid="c4f53a86d1eb4bdebed4ec5dd9b5ff45">tempest-TestStampPattern-194782062-project-member</nova:user>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:project uuid="e34fda55585f453b8b66f12e625234fe">tempest-TestStampPattern-194782062</nova:project>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <nova:port uuid="f819ff69-f947-468c-9e7a-6ba9cca9c85f">
Nov 29 08:05:52 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <system>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="serial">e13306d3-0b4c-4937-8b4b-83605575ce82</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="uuid">e13306d3-0b4c-4937-8b4b-83605575ce82</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </system>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <os>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </os>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <features>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </features>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/e13306d3-0b4c-4937-8b4b-83605575ce82_disk">
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </source>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config">
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </source>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:05:52 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:0d:71:48"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <target dev="tapf819ff69-f9"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/console.log" append="off"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <video>
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </video>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:05:52 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:05:52 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:05:52 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:05:52 compute-0 nova_compute[255040]: </domain>
Nov 29 08:05:52 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.052 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Preparing to wait for external event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.052 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.052 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.053 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.053 255071 DEBUG nova.virt.libvirt.vif [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1728969715',display_name='tempest-TestStampPattern-server-1728969715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1728969715',id=8,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-cm16igo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:45Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=e13306d3-0b4c-4937-8b4b-83605575ce82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.054 255071 DEBUG nova.network.os_vif_util [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.054 255071 DEBUG nova.network.os_vif_util [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.055 255071 DEBUG os_vif [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.055 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.056 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.056 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.058 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.059 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf819ff69-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.059 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf819ff69-f9, col_values=(('external_ids', {'iface-id': 'f819ff69-f947-468c-9e7a-6ba9cca9c85f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:71:48', 'vm-uuid': 'e13306d3-0b4c-4937-8b4b-83605575ce82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.060 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:52 compute-0 NetworkManager[49116]: <info>  [1764403552.0617] manager: (tapf819ff69-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.063 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.065 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ea531ff8-057c-43b8-b82e-aae24fc1a334]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.068 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.069 255071 INFO os_vif [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9')
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.078 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a715fd-c6a7-47cf-98d1-64e0ad83388b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.080 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[864c5613-3094-470e-9d84-d214a67f6792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.107 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a01836d5-fcd6-4c9b-968f-d24deb3afa43]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574725, 'reachable_time': 38779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276204, 'error': None, 'target': 'ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.110 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-905664a2-5abc-4150-a1c0-f1b86c7f655e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:05:52 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:52.111 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[d279d91c-99b3-4ef4-83ed-c9f34111193e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d905664a2\x2d5abc\x2d4150\x2da1c0\x2df1b86c7f655e.mount: Deactivated successfully.
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.124 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.125 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.125 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No VIF found with MAC fa:16:3e:0d:71:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.126 255071 INFO nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Using config drive
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.153 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.162 255071 DEBUG nova.compute.manager [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-unplugged-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.163 255071 DEBUG oslo_concurrency.lockutils [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.163 255071 DEBUG oslo_concurrency.lockutils [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.164 255071 DEBUG oslo_concurrency.lockutils [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.164 255071 DEBUG nova.compute.manager [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] No waiting events found dispatching network-vif-unplugged-81864c7c-554a-4b81-ba45-62d08b95c981 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.164 255071 DEBUG nova.compute.manager [req-a6c83446-d262-467c-86c6-73f14bc4f1ca req-37905a9a-ab46-4ec8-b1a3-54b9fd2d3fb9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-unplugged-81864c7c-554a-4b81-ba45-62d08b95c981 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.336 255071 INFO nova.virt.libvirt.driver [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Deleting instance files /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07_del
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.337 255071 INFO nova.virt.libvirt.driver [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Deletion of /var/lib/nova/instances/ef9475c4-846b-4370-8330-5a59e328bc07_del complete
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.381 255071 INFO nova.compute.manager [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.382 255071 DEBUG oslo.service.loopingcall [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.382 255071 DEBUG nova.compute.manager [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:05:52 compute-0 nova_compute[255040]: 2025-11-29 08:05:52.382 255071 DEBUG nova.network.neutron [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.237 255071 INFO nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Creating config drive at /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.250 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4gu9_72x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.286 255071 DEBUG nova.network.neutron [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updated VIF entry in instance network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.287 255071 DEBUG nova.network.neutron [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.320 255071 DEBUG oslo_concurrency.lockutils [req-18315f21-f489-4d67-939a-dd4419af8ff7 req-a0cc6bfb-7e63-426b-81c1-32297ba2a848 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.391 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4gu9_72x" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.422 255071 DEBUG nova.storage.rbd_utils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.426 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:53 compute-0 ceph-mon[75237]: pgmap v1378: 305 pgs: 305 active+clean; 213 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Nov 29 08:05:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2505473751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 178 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.3 MiB/s wr, 85 op/s
Nov 29 08:05:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.838 255071 DEBUG oslo_concurrency.processutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config e13306d3-0b4c-4937-8b4b-83605575ce82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.840 255071 INFO nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Deleting local config drive /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82/disk.config because it was imported into RBD.
Nov 29 08:05:53 compute-0 kernel: tapf819ff69-f9: entered promiscuous mode
Nov 29 08:05:53 compute-0 systemd-udevd[276103]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:53 compute-0 NetworkManager[49116]: <info>  [1764403553.8941] manager: (tapf819ff69-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.895 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:53 compute-0 ovn_controller[153295]: 2025-11-29T08:05:53Z|00093|binding|INFO|Claiming lport f819ff69-f947-468c-9e7a-6ba9cca9c85f for this chassis.
Nov 29 08:05:53 compute-0 ovn_controller[153295]: 2025-11-29T08:05:53Z|00094|binding|INFO|f819ff69-f947-468c-9e7a-6ba9cca9c85f: Claiming fa:16:3e:0d:71:48 10.100.0.13
Nov 29 08:05:53 compute-0 NetworkManager[49116]: <info>  [1764403553.9081] device (tapf819ff69-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.906 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:71:48 10.100.0.13'], port_security=['fa:16:3e:0d:71:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e13306d3-0b4c-4937-8b4b-83605575ce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e34fda55585f453b8b66f12e625234fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '553abf0a-6893-4b91-98a5-f4750edd0687', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76f3566f-5b18-4f8e-8a2b-ee02876f83ee, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=f819ff69-f947-468c-9e7a-6ba9cca9c85f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.907 163500 INFO neutron.agent.ovn.metadata.agent [-] Port f819ff69-f947-468c-9e7a-6ba9cca9c85f in datapath 40f35c3c-5e61-44c9-af5e-70c7d4a4426c bound to our chassis
Nov 29 08:05:53 compute-0 NetworkManager[49116]: <info>  [1764403553.9090] device (tapf819ff69-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.908 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40f35c3c-5e61-44c9-af5e-70c7d4a4426c
Nov 29 08:05:53 compute-0 ovn_controller[153295]: 2025-11-29T08:05:53Z|00095|binding|INFO|Setting lport f819ff69-f947-468c-9e7a-6ba9cca9c85f ovn-installed in OVS
Nov 29 08:05:53 compute-0 ovn_controller[153295]: 2025-11-29T08:05:53Z|00096|binding|INFO|Setting lport f819ff69-f947-468c-9e7a-6ba9cca9c85f up in Southbound
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.915 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:53 compute-0 nova_compute[255040]: 2025-11-29 08:05:53.917 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.921 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a1c499-ac04-4b09-973d-a5ae0ffd987c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.921 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40f35c3c-51 in ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.923 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40f35c3c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.923 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fc99a3-3ee0-460b-b0d7-7baeb4d8734a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.924 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6d169173-af85-4050-a70c-c381fc966b9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 systemd-machined[216271]: New machine qemu-8-instance-00000008.
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.940 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[17fe4263-4a20-4a89-96be-647fa2127337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.952 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4636c285-f8ab-40b7-906a-5ff36b773ac5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.983 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[b0449322-67ed-414e-9d5b-555c7bb1d0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:53.988 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5dac3748-9dfb-4354-98f1-1a79d17f9cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:53 compute-0 NetworkManager[49116]: <info>  [1764403553.9901] manager: (tap40f35c3c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.012 255071 DEBUG nova.network.neutron [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.024 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[dc975a62-22d0-4044-b962-f1bae58b46e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.027 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[baa51c30-227c-49e5-b448-a70fefa52216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.044 255071 INFO nova.compute.manager [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Took 1.66 seconds to deallocate network for instance.
Nov 29 08:05:54 compute-0 NetworkManager[49116]: <info>  [1764403554.0495] device (tap40f35c3c-50): carrier: link connected
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.055 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc8b5b4-d025-4be2-868b-203c164e8730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.076 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2335a0a2-697b-47a0-ad9a-52aeecb59f41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40f35c3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:36:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579138, 'reachable_time': 35207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276308, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.085 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.085 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.100 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[237ac112-190a-4b3a-8feb-08a264881492]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:36ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579138, 'tstamp': 579138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276309, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.122 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3b8265-3b57-41c3-b069-040ea152ac30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40f35c3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:36:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579138, 'reachable_time': 35207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276310, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.156 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c16f1fa1-6730-42ee-95f7-e7ed977f9247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.196 255071 DEBUG oslo_concurrency.processutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.232 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[766544b7-9e3c-449b-92f7-296f2d4767fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.234 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40f35c3c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.235 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.235 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40f35c3c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:54 compute-0 kernel: tap40f35c3c-50: entered promiscuous mode
Nov 29 08:05:54 compute-0 NetworkManager[49116]: <info>  [1764403554.2390] manager: (tap40f35c3c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.238 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.242 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40f35c3c-50, col_values=(('external_ids', {'iface-id': '7416de2d-6dc8-411d-a143-d9d9b0a4507f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.243 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 ovn_controller[153295]: 2025-11-29T08:05:54Z|00097|binding|INFO|Releasing lport 7416de2d-6dc8-411d-a143-d9d9b0a4507f from this chassis (sb_readonly=0)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.244 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.245 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40f35c3c-5e61-44c9-af5e-70c7d4a4426c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40f35c3c-5e61-44c9-af5e-70c7d4a4426c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.246 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6bd205-d200-49f0-9626-8f9fdc628caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.248 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-40f35c3c-5e61-44c9-af5e-70c7d4a4426c
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/40f35c3c-5e61-44c9-af5e-70c7d4a4426c.pid.haproxy
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 40f35c3c-5e61-44c9-af5e-70c7d4a4426c
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:05:54.249 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'env', 'PROCESS_TAG=haproxy-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40f35c3c-5e61-44c9-af5e-70c7d4a4426c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.259 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.270 255071 DEBUG nova.compute.manager [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.271 255071 DEBUG oslo_concurrency.lockutils [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.271 255071 DEBUG oslo_concurrency.lockutils [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.271 255071 DEBUG oslo_concurrency.lockutils [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.271 255071 DEBUG nova.compute.manager [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] No waiting events found dispatching network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.272 255071 WARNING nova.compute.manager [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received unexpected event network-vif-plugged-81864c7c-554a-4b81-ba45-62d08b95c981 for instance with vm_state deleted and task_state None.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.272 255071 DEBUG nova.compute.manager [req-697fdff9-0b70-4e6d-a1b1-686d84ea7dab req-0544cc71-e6ae-44c8-96b4-8079ecf14cc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Received event network-vif-deleted-81864c7c-554a-4b81-ba45-62d08b95c981 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.313 255071 DEBUG nova.compute.manager [req-f9f1f789-74c5-4379-9729-75a9c4ceb414 req-7c64cc99-9886-4f68-9704-af0255aad0f3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.314 255071 DEBUG oslo_concurrency.lockutils [req-f9f1f789-74c5-4379-9729-75a9c4ceb414 req-7c64cc99-9886-4f68-9704-af0255aad0f3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.314 255071 DEBUG oslo_concurrency.lockutils [req-f9f1f789-74c5-4379-9729-75a9c4ceb414 req-7c64cc99-9886-4f68-9704-af0255aad0f3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.314 255071 DEBUG oslo_concurrency.lockutils [req-f9f1f789-74c5-4379-9729-75a9c4ceb414 req-7c64cc99-9886-4f68-9704-af0255aad0f3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.315 255071 DEBUG nova.compute.manager [req-f9f1f789-74c5-4379-9729-75a9c4ceb414 req-7c64cc99-9886-4f68-9704-af0255aad0f3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Processing event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:05:54 compute-0 ceph-mon[75237]: pgmap v1379: 305 pgs: 305 active+clean; 178 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.3 MiB/s wr, 85 op/s
Nov 29 08:05:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810310280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.629 255071 DEBUG oslo_concurrency.processutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.636 255071 DEBUG nova.compute.provider_tree [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.644 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403554.643668, e13306d3-0b4c-4937-8b4b-83605575ce82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.644 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] VM Started (Lifecycle Event)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.645 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.649 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.652 255071 INFO nova.virt.libvirt.driver [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Instance spawned successfully.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.652 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.655 255071 DEBUG nova.scheduler.client.report [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.664 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.666 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:54 compute-0 podman[276401]: 2025-11-29 08:05:54.668864858 +0000 UTC m=+0.066983307 container create 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:05:54 compute-0 systemd[1]: Started libpod-conmon-2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d.scope.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.704 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.704 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.705 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.705 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.706 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.706 255071 DEBUG nova.virt.libvirt.driver [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.711 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:54 compute-0 podman[276401]: 2025-11-29 08:05:54.632177308 +0000 UTC m=+0.030295777 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.727 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.727 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403554.643893, e13306d3-0b4c-4937-8b4b-83605575ce82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.727 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] VM Paused (Lifecycle Event)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.733 255071 INFO nova.scheduler.client.report [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Deleted allocations for instance ef9475c4-846b-4370-8330-5a59e328bc07
Nov 29 08:05:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1774ff2c0222df0b25e867da676708186d4a35be5be0d0ea6a0908c0c5521f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:54 compute-0 podman[276401]: 2025-11-29 08:05:54.758611688 +0000 UTC m=+0.156730167 container init 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.761 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:54 compute-0 podman[276401]: 2025-11-29 08:05:54.765843482 +0000 UTC m=+0.163961931 container start 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.770 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403554.6487617, e13306d3-0b4c-4937-8b4b-83605575ce82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.770 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] VM Resumed (Lifecycle Event)
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.775 255071 INFO nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Took 9.35 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.776 255071 DEBUG nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:54 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [NOTICE]   (276423) : New worker (276425) forked
Nov 29 08:05:54 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [NOTICE]   (276423) : Loading success.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.799 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.803 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.837 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.868 255071 DEBUG oslo_concurrency.lockutils [None req-32b7b753-91b7-400e-888f-77727701f6f0 7d13a2468b4442809f7968c612cb7523 65fc2f72f64e4c91b66d05d7ebaf9e4c - - default default] Lock "ef9475c4-846b-4370-8330-5a59e328bc07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.872 255071 INFO nova.compute.manager [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Took 10.59 seconds to build instance.
Nov 29 08:05:54 compute-0 nova_compute[255040]: 2025-11-29 08:05:54.888 255071 DEBUG oslo_concurrency.lockutils [None req-05b98158-4d48-47f4-bed4-91a6a163dfb9 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3810310280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034688731116103405 of space, bias 1.0, pg target 0.10406619334831022 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.407 255071 DEBUG nova.compute.manager [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.408 255071 DEBUG oslo_concurrency.lockutils [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.408 255071 DEBUG oslo_concurrency.lockutils [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.408 255071 DEBUG oslo_concurrency.lockutils [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.408 255071 DEBUG nova.compute.manager [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] No waiting events found dispatching network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:56 compute-0 nova_compute[255040]: 2025-11-29 08:05:56.409 255071 WARNING nova.compute.manager [req-1c3586ad-7998-4f64-96e8-e8add056f34d req-d6c459f0-aa7b-42e7-a913-393c75c76d4d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received unexpected event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f for instance with vm_state active and task_state None.
Nov 29 08:05:56 compute-0 ceph-mon[75237]: pgmap v1380: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:05:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26540153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26540153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:57 compute-0 nova_compute[255040]: 2025-11-29 08:05:57.062 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/26540153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/26540153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:05:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 29 08:05:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740135445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3740135445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:58 compute-0 ceph-mon[75237]: pgmap v1381: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 08:05:58 compute-0 podman[276434]: 2025-11-29 08:05:58.913190667 +0000 UTC m=+0.077643364 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:05:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 133 op/s
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.624 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 08:05:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.671 255071 DEBUG nova.compute.manager [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.672 255071 DEBUG nova.compute.manager [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing instance network info cache due to event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.672 255071 DEBUG oslo_concurrency.lockutils [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.672 255071 DEBUG oslo_concurrency.lockutils [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:59 compute-0 nova_compute[255040]: 2025-11-29 08:05:59.672 255071 DEBUG nova.network.neutron [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3740135445' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3740135445' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:59 compute-0 ceph-mon[75237]: osdmap e242: 3 total, 3 up, 3 in
Nov 29 08:06:00 compute-0 nova_compute[255040]: 2025-11-29 08:06:00.545 255071 DEBUG nova.network.neutron [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updated VIF entry in instance network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:00 compute-0 nova_compute[255040]: 2025-11-29 08:06:00.545 255071 DEBUG nova.network.neutron [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:00 compute-0 nova_compute[255040]: 2025-11-29 08:06:00.564 255071 DEBUG oslo_concurrency.lockutils [req-c7f18c91-36c3-4468-beda-c42082329fbe req-fa5b3748-807d-4881-af60-f24869eb573c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:00 compute-0 ovn_controller[153295]: 2025-11-29T08:06:00Z|00098|binding|INFO|Releasing lport 7416de2d-6dc8-411d-a143-d9d9b0a4507f from this chassis (sb_readonly=0)
Nov 29 08:06:00 compute-0 nova_compute[255040]: 2025-11-29 08:06:00.664 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:00 compute-0 ceph-mon[75237]: pgmap v1382: 305 pgs: 305 active+clean; 134 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 133 op/s
Nov 29 08:06:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 139 op/s
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.064 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.403 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.403 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.417 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.479 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.479 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.487 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:02 compute-0 nova_compute[255040]: 2025-11-29 08:06:02.488 255071 INFO nova.compute.claims [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:06:02 compute-0 ceph-mon[75237]: pgmap v1384: 305 pgs: 305 active+clean; 134 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 139 op/s
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.473 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 134 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 136 op/s
Nov 29 08:06:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214398967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.939 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.948 255071 DEBUG nova.compute.provider_tree [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.965 255071 DEBUG nova.scheduler.client.report [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.988 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:03 compute-0 nova_compute[255040]: 2025-11-29 08:06:03.989 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.037 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.038 255071 DEBUG nova.network.neutron [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.058 255071 INFO nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.077 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.169 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.171 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.171 255071 INFO nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Creating image(s)
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.195 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.229 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.258 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.263 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.347 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.349 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.350 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.350 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.376 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.382 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:04.402 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:04.403 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.407 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.411 255071 DEBUG nova.network.neutron [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.412 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:04 compute-0 ovn_controller[153295]: 2025-11-29T08:06:04Z|00099|binding|INFO|Releasing lport 7416de2d-6dc8-411d-a143-d9d9b0a4507f from this chassis (sb_readonly=0)
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.585 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.628 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.727 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.797 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] resizing rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:06:04 compute-0 ceph-mon[75237]: pgmap v1385: 305 pgs: 305 active+clean; 134 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 136 op/s
Nov 29 08:06:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/214398967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:04 compute-0 podman[276612]: 2025-11-29 08:06:04.906180576 +0000 UTC m=+0.067500531 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.910 255071 DEBUG nova.objects.instance [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lazy-loading 'migration_context' on Instance uuid fedca654-1ef7-4f00-ad7e-511a0b2334ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.924 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.924 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Ensure instance console log exists: /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.925 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.925 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.926 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.927 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.932 255071 WARNING nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.938 255071 DEBUG nova.virt.libvirt.host [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.938 255071 DEBUG nova.virt.libvirt.host [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.942 255071 DEBUG nova.virt.libvirt.host [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.942 255071 DEBUG nova.virt.libvirt.host [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.943 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.943 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.943 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.944 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.944 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.944 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.944 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.946 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.946 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.946 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.947 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.947 255071 DEBUG nova.virt.hardware [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:04 compute-0 nova_compute[255040]: 2025-11-29 08:06:04.950 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497717404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.447 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.472 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.477 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 150 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 875 KiB/s wr, 120 op/s
Nov 29 08:06:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3497717404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:05 compute-0 ceph-mon[75237]: pgmap v1386: 305 pgs: 305 active+clean; 150 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 875 KiB/s wr, 120 op/s
Nov 29 08:06:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128400078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.979 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.982 255071 DEBUG nova.objects.instance [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid fedca654-1ef7-4f00-ad7e-511a0b2334ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:05 compute-0 nova_compute[255040]: 2025-11-29 08:06:05.998 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <uuid>fedca654-1ef7-4f00-ad7e-511a0b2334ac</uuid>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <name>instance-00000009</name>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesNegativeTest-instance-2042110606</nova:name>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:06:04</nova:creationTime>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:user uuid="e50429b3ae1f4fc991cb252fc466bcb6">tempest-VolumesNegativeTest-911300911-project-member</nova:user>
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <nova:project uuid="22e0ec1941b84ce0b1e115e1f9bc04b8">tempest-VolumesNegativeTest-911300911</nova:project>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <nova:ports/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <system>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="serial">fedca654-1ef7-4f00-ad7e-511a0b2334ac</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="uuid">fedca654-1ef7-4f00-ad7e-511a0b2334ac</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </system>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <os>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </os>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <features>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </features>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk">
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </source>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config">
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </source>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:06:05 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/console.log" append="off"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <video>
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </video>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:06:05 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:06:05 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:06:05 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:06:05 compute-0 nova_compute[255040]: </domain>
Nov 29 08:06:05 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
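
The domain XML dumped above is exactly what the libvirt driver hands to libvirt at spawn. A short sketch (not nova code) that pulls the RBD-backed disks out of such a dump for ad-hoc inspection:

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml: str):
        """Yield (target dev, rbd image name) for each network disk."""
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk[@type='network']"):
            source, target = disk.find('source'), disk.find('target')
            if source is not None and source.get('protocol') == 'rbd':
                yield target.get('dev'), source.get('name')

    # Against the XML above this yields:
    #   ('vda', 'vms/fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk')
    #   ('sda', 'vms/fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config')
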
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.049 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.050 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.050 255071 INFO nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Using config drive
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.073 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.374 255071 INFO nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Creating config drive at /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.382 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx321kft7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.530 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx321kft7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
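
The config drive announced at 08:06:06.374 is just an ISO9660 image with volume label config-2, built from a temporary directory of metadata files (/tmp/tmpx321kft7 above). A sketch of the same mkisofs invocation; the paths here are illustrative:

    import subprocess

    def build_config_drive(staging_dir: str, out_path: str) -> None:
        # Flags mirror the logged command; "-V config-2" is the volume label
        # that guest-side consumers such as cloud-init look for.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', out_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute',
             '-quiet', '-J', '-r', '-V', 'config-2',
             staging_dir],
            check=True)
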
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.558 255071 DEBUG nova.storage.rbd_utils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] rbd image fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.561 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.733 255071 DEBUG oslo_concurrency.processutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config fedca654-1ef7-4f00-ad7e-511a0b2334ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.735 255071 INFO nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Deleting local config drive /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac/disk.config because it was imported into RBD.
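
Because this deployment keeps instance disks in Ceph (note the vms pool throughout), the freshly built ISO is then imported into RBD and the local copy removed, exactly as the three lines above record. A sketch of that step, again mirroring the logged CLI rather than the librbd bindings:

    import os
    from oslo_concurrency import processutils

    def import_config_drive(local_iso: str, image_name: str) -> None:
        # Streams the file into a new format-2 image in the "vms" pool.
        processutils.execute(
            'rbd', 'import', '--pool', 'vms', local_iso, image_name,
            '--image-format=2', '--id', 'openstack',
            '--conf', '/etc/ceph/ceph.conf')
        # "Deleting local config drive ... because it was imported into RBD."
        os.unlink(local_iso)
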
Nov 29 08:06:06 compute-0 systemd-machined[216271]: New machine qemu-9-instance-00000009.
Nov 29 08:06:06 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.890 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403551.8880315, ef9475c4-846b-4370-8330-5a59e328bc07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.892 255071 INFO nova.compute.manager [-] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] VM Stopped (Lifecycle Event)
Nov 29 08:06:06 compute-0 nova_compute[255040]: 2025-11-29 08:06:06.913 255071 DEBUG nova.compute.manager [None req-d9f3ea5d-cf7a-472d-aeef-bd999fcbe8f1 - - - - - -] [instance: ef9475c4-846b-4370-8330-5a59e328bc07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3128400078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.066 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.275 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403567.275011, fedca654-1ef7-4f00-ad7e-511a0b2334ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.276 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] VM Resumed (Lifecycle Event)
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.278 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.278 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.281 255071 INFO nova.virt.libvirt.driver [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance spawned successfully.
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.281 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.304 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.311 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.314 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.314 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.314 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.315 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.315 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.315 255071 DEBUG nova.virt.libvirt.driver [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.347 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.349 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403567.2776606, fedca654-1ef7-4f00-ad7e-511a0b2334ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.349 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] VM Started (Lifecycle Event)
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.373 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.377 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.383 255071 INFO nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Took 3.21 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.383 255071 DEBUG nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.397 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] During sync_power_state the instance has a pending task (spawning). Skip.
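
The "Synchronizing instance power state" / "pending task ... Skip" pairs above show nova reconciling its database view (power_state 0, i.e. NOSTATE) with what the hypervisor reports (1, RUNNING), while deferring any correction as long as a task such as spawning is still in flight. A much-simplified sketch of that decision:

    # nova.compute.power_state values seen in the log: 0 = NOSTATE, 1 = RUNNING.
    NOSTATE, RUNNING = 0, 1

    def maybe_sync(db_power_state: int, vm_power_state: int, task_state):
        if task_state is not None:
            # e.g. task_state 'spawning' while vm_state is still 'building'
            return 'pending task ({}), skip'.format(task_state)
        if db_power_state != vm_power_state:
            return 'update DB power_state and reconcile vm_state'
        return 'in sync'

    print(maybe_sync(NOSTATE, RUNNING, 'spawning'))  # pending task (spawning), skip
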
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.443 255071 INFO nova.compute.manager [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Took 4.99 seconds to build instance.
Nov 29 08:06:07 compute-0 nova_compute[255040]: 2025-11-29 08:06:07.460 255071 DEBUG oslo_concurrency.lockutils [None req-9b417402-d1f1-4254-8e20-4981844f29fe e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
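
The per-instance lock bracketing the whole build (held 5.057s above) comes from oslo.concurrency's lockutils, whose inner() wrapper emits exactly these Acquiring/acquired/released debug lines. The pattern, sketched:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'fedca654-1ef7-4f00-ad7e-511a0b2334ac'

    # synchronized() serializes everything keyed on this UUID, which is why the
    # terminate request below waits on the same lock name once the build is done.
    @lockutils.synchronized(INSTANCE_UUID)
    def _locked_do_build_and_run_instance():
        pass  # spawn the guest while holding the lock
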
Nov 29 08:06:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 150 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 875 KiB/s wr, 120 op/s
Nov 29 08:06:07 compute-0 ceph-mon[75237]: pgmap v1387: 305 pgs: 305 active+clean; 150 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 875 KiB/s wr, 120 op/s
Nov 29 08:06:08 compute-0 ovn_controller[153295]: 2025-11-29T08:06:08Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:71:48 10.100.0.13
Nov 29 08:06:08 compute-0 ovn_controller[153295]: 2025-11-29T08:06:08Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:71:48 10.100.0.13
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.753 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.754 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.754 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.755 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.755 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.757 255071 INFO nova.compute.manager [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Terminating instance
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.758 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "refresh_cache-fedca654-1ef7-4f00-ad7e-511a0b2334ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.758 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquired lock "refresh_cache-fedca654-1ef7-4f00-ad7e-511a0b2334ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:08 compute-0 nova_compute[255040]: 2025-11-29 08:06:08.759 255071 DEBUG nova.network.neutron [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:09 compute-0 nova_compute[255040]: 2025-11-29 08:06:09.169 255071 DEBUG nova.network.neutron [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:09 compute-0 nova_compute[255040]: 2025-11-29 08:06:09.362 255071 DEBUG nova.network.neutron [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:09 compute-0 nova_compute[255040]: 2025-11-29 08:06:09.591 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Releasing lock "refresh_cache-fedca654-1ef7-4f00-ad7e-511a0b2334ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:09 compute-0 nova_compute[255040]: 2025-11-29 08:06:09.592 255071 DEBUG nova.compute.manager [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:06:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 194 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 131 op/s
Nov 29 08:06:09 compute-0 nova_compute[255040]: 2025-11-29 08:06:09.630 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:10 compute-0 ceph-mon[75237]: pgmap v1388: 305 pgs: 305 active+clean; 194 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 131 op/s
Nov 29 08:06:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 08:06:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 2.805s CPU time.
Nov 29 08:06:10 compute-0 systemd-machined[216271]: Machine qemu-9-instance-00000009 terminated.
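
The scope name machine-qemu\x2d9\x2dinstance\x2d00000009.scope is the systemd-escaped form of the machine name qemu-9-instance-00000009: literal dashes become \x2d, and systemd-machined adds the machine- prefix and .scope suffix. A quick check, assuming systemd-escape is on PATH:

    import subprocess

    escaped = subprocess.run(
        ['systemd-escape', 'qemu-9-instance-00000009'],
        capture_output=True, text=True, check=True).stdout.strip()
    print('machine-{}.scope'.format(escaped))
    # machine-qemu\x2d9\x2dinstance\x2d00000009.scope
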
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.216 255071 INFO nova.virt.libvirt.driver [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance destroyed successfully.
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.217 255071 DEBUG nova.objects.instance [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lazy-loading 'resources' on Instance uuid fedca654-1ef7-4f00-ad7e-511a0b2334ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.708 255071 INFO nova.virt.libvirt.driver [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Deleting instance files /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac_del
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.709 255071 INFO nova.virt.libvirt.driver [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Deletion of /var/lib/nova/instances/fedca654-1ef7-4f00-ad7e-511a0b2334ac_del complete
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.769 255071 INFO nova.compute.manager [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Took 1.18 seconds to destroy the instance on the hypervisor.
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.769 255071 DEBUG oslo.service.loopingcall [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.770 255071 DEBUG nova.compute.manager [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.770 255071 DEBUG nova.network.neutron [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:06:10 compute-0 nova_compute[255040]: 2025-11-29 08:06:10.882 255071 DEBUG nova.network.neutron [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.062 255071 DEBUG nova.network.neutron [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.085 255071 INFO nova.compute.manager [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Took 0.32 seconds to deallocate network for instance.
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.130 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.131 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.211 255071 DEBUG oslo_concurrency.processutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 206 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Nov 29 08:06:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114983226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.899 255071 DEBUG oslo_concurrency.processutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.905 255071 DEBUG nova.compute.provider_tree [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.927 255071 DEBUG nova.scheduler.client.report [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
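
The inventory dict above is what bounds scheduling for this node in placement: usable capacity per resource class is roughly (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(usable, 1))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
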
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.954 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:11 compute-0 nova_compute[255040]: 2025-11-29 08:06:11.980 255071 INFO nova.scheduler.client.report [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Deleted allocations for instance fedca654-1ef7-4f00-ad7e-511a0b2334ac
Nov 29 08:06:12 compute-0 nova_compute[255040]: 2025-11-29 08:06:12.055 255071 DEBUG oslo_concurrency.lockutils [None req-72d10aa9-02d0-4e1a-a28b-ecfa329b02d7 e50429b3ae1f4fc991cb252fc466bcb6 22e0ec1941b84ce0b1e115e1f9bc04b8 - - default default] Lock "fedca654-1ef7-4f00-ad7e-511a0b2334ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:12 compute-0 nova_compute[255040]: 2025-11-29 08:06:12.070 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:12 compute-0 ceph-mon[75237]: pgmap v1389: 305 pgs: 305 active+clean; 206 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 149 op/s
Nov 29 08:06:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2114983226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:13.406 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 198 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 08:06:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:14 compute-0 nova_compute[255040]: 2025-11-29 08:06:14.631 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:14 compute-0 ceph-mon[75237]: pgmap v1390: 305 pgs: 305 active+clean; 198 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 08:06:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 196 op/s
Nov 29 08:06:15 compute-0 nova_compute[255040]: 2025-11-29 08:06:15.748 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:15 compute-0 nova_compute[255040]: 2025-11-29 08:06:15.749 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:15 compute-0 nova_compute[255040]: 2025-11-29 08:06:15.763 255071 DEBUG nova.objects.instance [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:15 compute-0 nova_compute[255040]: 2025-11-29 08:06:15.797 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.025 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.025 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.026 255071 INFO nova.compute.manager [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Attaching volume 0de06e14-bafa-4551-9b68-07a3afab0078 to /dev/vdb
Nov 29 08:06:16 compute-0 ceph-mon[75237]: pgmap v1391: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 196 op/s
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.204 255071 DEBUG os_brick.utils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.206 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.221 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.221 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf2d9df-558d-481f-bd72-2628a5c8623a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.223 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.233 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.233 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4f4342-27cc-4bd8-9765-768bb3627a29]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.235 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.243 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.243 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a7cc2a-5ad2-4ea4-9b2c-1d0488dd133a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.246 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd88601-c179-4277-aaac-bf56b544b49f]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.247 255071 DEBUG oslo_concurrency.processutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.278 255071 DEBUG oslo_concurrency.processutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.280 255071 DEBUG os_brick.initiator.connectors.lightos [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.281 255071 DEBUG os_brick.initiator.connectors.lightos [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.281 255071 DEBUG os_brick.initiator.connectors.lightos [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.282 255071 DEBUG os_brick.utils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
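
The get_connector_properties trace above is the os-brick call nova makes to describe this host to cinder before a volume attach: iSCSI IQN, NVMe NQN and host ID, multipath support, and so on. A sketch of the same call with the logged arguments:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props carries 'initiator', 'nqn', 'system uuid', 'multipath', ... as in
    # the return dict logged above; cinder uses it to build connection info.
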
Nov 29 08:06:16 compute-0 nova_compute[255040]: 2025-11-29 08:06:16.282 255071 DEBUG nova.virt.block_device [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating existing volume attachment record: b52563d0-9190-4832-80a6-c62d97a6bcd7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:06:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1845876626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.003 255071 DEBUG nova.objects.instance [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.028 255071 DEBUG nova.virt.libvirt.driver [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Attempting to attach volume 0de06e14-bafa-4551-9b68-07a3afab0078 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.032 255071 DEBUG nova.virt.libvirt.guest [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:06:17 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0de06e14-bafa-4551-9b68-07a3afab0078">
Nov 29 08:06:17 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   </source>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:06:17 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:06:17 compute-0 nova_compute[255040]:   <serial>0de06e14-bafa-4551-9b68-07a3afab0078</serial>
Nov 29 08:06:17 compute-0 nova_compute[255040]: </disk>
Nov 29 08:06:17 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
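
That <disk> element is then handed to libvirt's hot-plug API. A minimal sketch of the underlying call with the python libvirt bindings, using the XML just logged (trimmed of the auth and serial elements for brevity):

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-0de06e14-bafa-4551-9b68-07a3afab0078">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('e13306d3-0b4c-4937-8b4b-83605575ce82')
    # Attach to the running guest and persist it in the config.
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
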
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.073 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.358 255071 DEBUG nova.virt.libvirt.driver [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.359 255071 DEBUG nova.virt.libvirt.driver [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.359 255071 DEBUG nova.virt.libvirt.driver [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.359 255071 DEBUG nova.virt.libvirt.driver [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No VIF found with MAC fa:16:3e:0d:71:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:17 compute-0 nova_compute[255040]: 2025-11-29 08:06:17.552 255071 DEBUG oslo_concurrency.lockutils [None req-77da83e0-3f4e-4231-b287-85833436eb3c c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 29 08:06:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 184 op/s
Nov 29 08:06:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1845876626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 08:06:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 29 08:06:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:18 compute-0 ceph-mon[75237]: pgmap v1392: 305 pgs: 305 active+clean; 167 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 184 op/s
Nov 29 08:06:18 compute-0 ceph-mon[75237]: osdmap e243: 3 total, 3 up, 3 in
Nov 29 08:06:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 130 op/s
Nov 29 08:06:19 compute-0 nova_compute[255040]: 2025-11-29 08:06:19.633 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 podman[276909]: 2025-11-29 08:06:19.939871908 +0000 UTC m=+0.101830526 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:06:19 compute-0 nova_compute[255040]: 2025-11-29 08:06:19.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:19 compute-0 nova_compute[255040]: 2025-11-29 08:06:19.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:06:19 compute-0 nova_compute[255040]: 2025-11-29 08:06:19.992 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:06:20 compute-0 ceph-mon[75237]: pgmap v1394: 305 pgs: 305 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 130 op/s
Nov 29 08:06:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2313679573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2313679573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 29 08:06:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2313679573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2313679573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 08:06:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.369 255071 DEBUG oslo_concurrency.lockutils [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.370 255071 DEBUG oslo_concurrency.lockutils [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.392 255071 INFO nova.compute.manager [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Detaching volume 0de06e14-bafa-4551-9b68-07a3afab0078
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.520 255071 INFO nova.virt.block_device [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Attempting to driver detach volume 0de06e14-bafa-4551-9b68-07a3afab0078 from mountpoint /dev/vdb
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.530 255071 DEBUG nova.virt.libvirt.driver [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Attempting to detach device vdb from instance e13306d3-0b4c-4937-8b4b-83605575ce82 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.531 255071 DEBUG nova.virt.libvirt.guest [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0de06e14-bafa-4551-9b68-07a3afab0078">
Nov 29 08:06:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <serial>0de06e14-bafa-4551-9b68-07a3afab0078</serial>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:06:21 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:06:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 30 KiB/s wr, 51 op/s
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.740 255071 INFO nova.virt.libvirt.driver [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully detached device vdb from instance e13306d3-0b4c-4937-8b4b-83605575ce82 from the persistent domain config.
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.741 255071 DEBUG nova.virt.libvirt.driver [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e13306d3-0b4c-4937-8b4b-83605575ce82 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:06:21 compute-0 nova_compute[255040]: 2025-11-29 08:06:21.741 255071 DEBUG nova.virt.libvirt.guest [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-0de06e14-bafa-4551-9b68-07a3afab0078">
Nov 29 08:06:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <serial>0de06e14-bafa-4551-9b68-07a3afab0078</serial>
Nov 29 08:06:21 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:06:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:06:21 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:06:22 compute-0 nova_compute[255040]: 2025-11-29 08:06:22.076 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.050 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403583.0502408, e13306d3-0b4c-4937-8b4b-83605575ce82 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.053 255071 DEBUG nova.virt.libvirt.driver [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e13306d3-0b4c-4937-8b4b-83605575ce82 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.056 255071 INFO nova.virt.libvirt.driver [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully detached device vdb from instance e13306d3-0b4c-4937-8b4b-83605575ce82 from the live domain config.
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.217 255071 DEBUG nova.objects.instance [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.258 255071 DEBUG oslo_concurrency.lockutils [None req-26d3c5f6-5228-4ad1-b08e-3a44f1b6cdbd c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:23 compute-0 ceph-mon[75237]: osdmap e244: 3 total, 3 up, 3 in
Nov 29 08:06:23 compute-0 ceph-mon[75237]: pgmap v1396: 305 pgs: 305 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 30 KiB/s wr, 51 op/s
Nov 29 08:06:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 306 KiB/s wr, 39 op/s
Nov 29 08:06:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:23 compute-0 nova_compute[255040]: 2025-11-29 08:06:23.992 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3221461623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3221461623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 29 08:06:24 compute-0 ceph-mon[75237]: pgmap v1397: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 306 KiB/s wr, 39 op/s
Nov 29 08:06:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3221461623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3221461623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 08:06:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 29 08:06:24 compute-0 nova_compute[255040]: 2025-11-29 08:06:24.634 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.216 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403570.2131832, fedca654-1ef7-4f00-ad7e-511a0b2334ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.216 255071 INFO nova.compute.manager [-] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] VM Stopped (Lifecycle Event)
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.236 255071 DEBUG nova.compute.manager [None req-c65ccfaa-10d7-4db7-b725-f5793d91556e - - - - - -] [instance: fedca654-1ef7-4f00-ad7e-511a0b2334ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:25 compute-0 ceph-mon[75237]: osdmap e245: 3 total, 3 up, 3 in
Nov 29 08:06:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 315 KiB/s wr, 110 op/s
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.697 255071 DEBUG nova.compute.manager [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.751 255071 INFO nova.compute.manager [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] instance snapshotting
Nov 29 08:06:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3588758782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3588758782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:25 compute-0 nova_compute[255040]: 2025-11-29 08:06:25.959 255071 INFO nova.virt.libvirt.driver [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Beginning live snapshot process
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.093 255071 DEBUG nova.virt.libvirt.imagebackend [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No parent info for 36a9388d-0d77-4d24-a915-be92247e5dbc; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.259 255071 DEBUG nova.storage.rbd_utils [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] creating snapshot(bfb4bc7738234fa2bac6ac1f911041e3) on rbd image(e13306d3-0b4c-4937-8b4b-83605575ce82_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:06:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 29 08:06:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 08:06:26 compute-0 ceph-mon[75237]: pgmap v1399: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 315 KiB/s wr, 110 op/s
Nov 29 08:06:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3588758782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3588758782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.585 255071 DEBUG nova.storage.rbd_utils [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] cloning vms/e13306d3-0b4c-4937-8b4b-83605575ce82_disk@bfb4bc7738234fa2bac6ac1f911041e3 to images/66b56617-4575-4cdd-9816-e743304dffab clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.705 255071 DEBUG nova.storage.rbd_utils [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] flattening images/66b56617-4575-4cdd-9816-e743304dffab flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:26 compute-0 nova_compute[255040]: 2025-11-29 08:06:26.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:06:27 compute-0 nova_compute[255040]: 2025-11-29 08:06:27.033 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:06:27 compute-0 nova_compute[255040]: 2025-11-29 08:06:27.033 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:27 compute-0 nova_compute[255040]: 2025-11-29 08:06:27.078 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:27.127 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:27.128 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:27.128 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 362 KiB/s wr, 105 op/s
Nov 29 08:06:27 compute-0 nova_compute[255040]: 2025-11-29 08:06:27.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:27 compute-0 nova_compute[255040]: 2025-11-29 08:06:27.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:28 compute-0 ceph-mon[75237]: osdmap e246: 3 total, 3 up, 3 in
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.101559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588101728, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1021, "num_deletes": 258, "total_data_size": 1266850, "memory_usage": 1294776, "flush_reason": "Manual Compaction"}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588156163, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1251171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25789, "largest_seqno": 26809, "table_properties": {"data_size": 1246152, "index_size": 2479, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11835, "raw_average_key_size": 20, "raw_value_size": 1235761, "raw_average_value_size": 2160, "num_data_blocks": 108, "num_entries": 572, "num_filter_entries": 572, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403523, "oldest_key_time": 1764403523, "file_creation_time": 1764403588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 54654 microseconds, and 6520 cpu microseconds.
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.156226) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1251171 bytes OK
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.156254) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.157886) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.157903) EVENT_LOG_v1 {"time_micros": 1764403588157897, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.157927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1261809, prev total WAL file size 1261809, number of live WAL files 2.
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.158780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1221KB)], [56(8524KB)]
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588158935, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9980449, "oldest_snapshot_seqno": -1}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5355 keys, 8194767 bytes, temperature: kUnknown
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588224254, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 8194767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8157189, "index_size": 23068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135359, "raw_average_key_size": 25, "raw_value_size": 8058857, "raw_average_value_size": 1504, "num_data_blocks": 935, "num_entries": 5355, "num_filter_entries": 5355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.224808) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 8194767 bytes
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.226933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 124.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.3 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(14.5) write-amplify(6.5) OK, records in: 5884, records dropped: 529 output_compression: NoCompression
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.226965) EVENT_LOG_v1 {"time_micros": 1764403588226951, "job": 30, "event": "compaction_finished", "compaction_time_micros": 65621, "compaction_time_cpu_micros": 22584, "output_level": 6, "num_output_files": 1, "total_output_size": 8194767, "num_input_records": 5884, "num_output_records": 5355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588227372, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403588228962, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.158575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.229295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.229304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.229306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.229308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:06:28.229310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:06:28 compute-0 nova_compute[255040]: 2025-11-29 08:06:28.294 255071 DEBUG nova.storage.rbd_utils [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] removing snapshot(bfb4bc7738234fa2bac6ac1f911041e3) on rbd image(e13306d3-0b4c-4937-8b4b-83605575ce82_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 08:06:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 29 08:06:28 compute-0 nova_compute[255040]: 2025-11-29 08:06:28.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:28 compute-0 nova_compute[255040]: 2025-11-29 08:06:28.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:06:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 223 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 202 op/s
Nov 29 08:06:29 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.639 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 08:06:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 29 08:06:29 compute-0 podman[277062]: 2025-11-29 08:06:29.897377206 +0000 UTC m=+0.060970014 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:06:29 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:29 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:29 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:06:29 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:30 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:30 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.999 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:30 compute-0 nova_compute[255040]: 2025-11-29 08:06:29.999 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:30 compute-0 nova_compute[255040]: 2025-11-29 08:06:30.000 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:06:30 compute-0 nova_compute[255040]: 2025-11-29 08:06:30.000 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:30 compute-0 ceph-mon[75237]: pgmap v1401: 305 pgs: 305 active+clean; 169 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 362 KiB/s wr, 105 op/s
Nov 29 08:06:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261311170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.002 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.068 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.068 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.248 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.250 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4367MB free_disk=59.94266891479492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.250 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.250 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.478 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance e13306d3-0b4c-4937-8b4b-83605575ce82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.478 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.478 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:06:31 compute-0 nova_compute[255040]: 2025-11-29 08:06:31.588 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 227 op/s
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.080 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232767000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.448 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.860s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.455 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.476 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.500 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.501 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.502 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:32 compute-0 ceph-mon[75237]: pgmap v1402: 305 pgs: 305 active+clean; 223 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 202 op/s
Nov 29 08:06:32 compute-0 ceph-mon[75237]: osdmap e247: 3 total, 3 up, 3 in
Nov 29 08:06:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3261311170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:32 compute-0 nova_compute[255040]: 2025-11-29 08:06:32.746 255071 DEBUG nova.storage.rbd_utils [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] creating snapshot(snap) on rbd image(66b56617-4575-4cdd-9816-e743304dffab) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 08:06:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 181 op/s
Nov 29 08:06:33 compute-0 ceph-mon[75237]: pgmap v1404: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 227 op/s
Nov 29 08:06:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1232767000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 29 08:06:34 compute-0 nova_compute[255040]: 2025-11-29 08:06:34.377 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 08:06:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 29 08:06:34 compute-0 nova_compute[255040]: 2025-11-29 08:06:34.515 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:34 compute-0 nova_compute[255040]: 2025-11-29 08:06:34.641 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:35 compute-0 ceph-mon[75237]: pgmap v1405: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 181 op/s
Nov 29 08:06:35 compute-0 ceph-mon[75237]: osdmap e248: 3 total, 3 up, 3 in
Nov 29 08:06:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 187 op/s
Nov 29 08:06:35 compute-0 podman[277146]: 2025-11-29 08:06:35.922893973 +0000 UTC m=+0.081071677 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:06:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338414245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338414245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75237]: pgmap v1407: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 187 op/s
Nov 29 08:06:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2338414245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2338414245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:37 compute-0 nova_compute[255040]: 2025-11-29 08:06:37.083 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:37 compute-0 nova_compute[255040]: 2025-11-29 08:06:37.414 255071 INFO nova.virt.libvirt.driver [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Snapshot image upload complete
Nov 29 08:06:37 compute-0 nova_compute[255040]: 2025-11-29 08:06:37.415 255071 INFO nova.compute.manager [None req-1ba08fe9-833e-4765-8e54-b5339f35ba4a c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Took 11.66 seconds to snapshot the instance on the hypervisor.
Nov 29 08:06:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:38 compute-0 ceph-mon[75237]: pgmap v1408: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:06:38
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 08:06:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:06:39 compute-0 nova_compute[255040]: 2025-11-29 08:06:39.384 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 29 08:06:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 08:06:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 29 08:06:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Nov 29 08:06:39 compute-0 nova_compute[255040]: 2025-11-29 08:06:39.644 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:40 compute-0 ceph-mon[75237]: osdmap e249: 3 total, 3 up, 3 in
Nov 29 08:06:40 compute-0 ceph-mon[75237]: pgmap v1410: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.241 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.242 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.267 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.362 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.363 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.371 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.372 255071 INFO nova.compute.claims [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:06:41 compute-0 nova_compute[255040]: 2025-11-29 08:06:41.482 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 57 op/s
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.086 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262798171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.301 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.309 255071 DEBUG nova.compute.provider_tree [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.362 255071 DEBUG nova.scheduler.client.report [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.394 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.396 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.461 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.462 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.482 255071 INFO nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.505 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.595 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.597 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.597 255071 INFO nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Creating image(s)
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.618 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.646 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.675 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.682 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "3a329e02467f246e3d53846204b57142f04f5dcc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.684 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "3a329e02467f246e3d53846204b57142f04f5dcc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.689 255071 DEBUG nova.policy [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c4f53a86d1eb4bdebed4ec5dd9b5ff45', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e34fda55585f453b8b66f12e625234fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.884 255071 DEBUG nova.virt.libvirt.imagebackend [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Image locations are: [{'url': 'rbd://321e9cb7-01a2-5759-bf8c-981c9a64aa3e/images/66b56617-4575-4cdd-9816-e743304dffab/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://321e9cb7-01a2-5759-bf8c-981c9a64aa3e/images/66b56617-4575-4cdd-9816-e743304dffab/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.958 255071 DEBUG nova.virt.libvirt.imagebackend [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Selected location: {'url': 'rbd://321e9cb7-01a2-5759-bf8c-981c9a64aa3e/images/66b56617-4575-4cdd-9816-e743304dffab/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 08:06:42 compute-0 nova_compute[255040]: 2025-11-29 08:06:42.958 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] cloning images/66b56617-4575-4cdd-9816-e743304dffab@snap to None/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 08:06:43 compute-0 ceph-mon[75237]: pgmap v1411: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 57 op/s
Nov 29 08:06:43 compute-0 nova_compute[255040]: 2025-11-29 08:06:43.248 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Successfully created port: 3adae585-03a6-434e-a645-7fb75855efe0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:06:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 55 op/s
Nov 29 08:06:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3262798171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:44 compute-0 ceph-mon[75237]: pgmap v1412: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.6 KiB/s wr, 55 op/s
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.134 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Successfully updated port: 3adae585-03a6-434e-a645-7fb75855efe0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.148 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.149 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquired lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.149 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.228 255071 DEBUG nova.compute.manager [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-changed-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.228 255071 DEBUG nova.compute.manager [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing instance network info cache due to event network-changed-3adae585-03a6-434e-a645-7fb75855efe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.229 255071 DEBUG oslo_concurrency.lockutils [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.264 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.646 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 08:06:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 29 08:06:44 compute-0 nova_compute[255040]: 2025-11-29 08:06:44.981 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "3a329e02467f246e3d53846204b57142f04f5dcc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.169 255071 DEBUG nova.objects.instance [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'migration_context' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.184 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.185 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Ensure instance console log exists: /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.185 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.186 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.186 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.361 255071 DEBUG nova.network.neutron [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating instance_info_cache with network_info: [{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.379 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Releasing lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.380 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Instance network_info: |[{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.381 255071 DEBUG oslo_concurrency.lockutils [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.382 255071 DEBUG nova.network.neutron [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.390 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Start _get_guest_xml network_info=[{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:06:25Z,direct_url=<?>,disk_format='raw',id=66b56617-4575-4cdd-9816-e743304dffab,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-716531212',owner='e34fda55585f453b8b66f12e625234fe',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:06:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '66b56617-4575-4cdd-9816-e743304dffab'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.398 255071 WARNING nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.402 255071 DEBUG nova.virt.libvirt.host [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.403 255071 DEBUG nova.virt.libvirt.host [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.409 255071 DEBUG nova.virt.libvirt.host [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.410 255071 DEBUG nova.virt.libvirt.host [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.410 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.410 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:06:25Z,direct_url=<?>,disk_format='raw',id=66b56617-4575-4cdd-9816-e743304dffab,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-716531212',owner='e34fda55585f453b8b66f12e625234fe',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:06:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.411 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.411 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.412 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.412 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.412 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.413 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.413 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.413 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.413 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.414 255071 DEBUG nova.virt.hardware [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.417 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 73 op/s
Nov 29 08:06:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 29 08:06:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 08:06:45 compute-0 ceph-mon[75237]: osdmap e250: 3 total, 3 up, 3 in
Nov 29 08:06:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 29 08:06:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889634702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.920 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.973 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:45 compute-0 nova_compute[255040]: 2025-11-29 08:06:45.980 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657468481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.440 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.442 255071 DEBUG nova.virt.libvirt.vif [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-292663265',display_name='tempest-TestStampPattern-server-292663265',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-292663265',id=10,image_ref='66b56617-4575-4cdd-9816-e743304dffab',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-zkgjybqr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e13306d3-0b4c-4937-8b4b-83605575ce82',image_min_disk='1',image_min_ram='0',image_owner_id='e34fda55585f453b8b66f12e625234fe',image_owner_project_name='tempest-TestStampPattern-194782062',image_owner_user_name='tempest-TestStampPattern-194782062-project-member',image_user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',network_allocated='True',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:42Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.443 255071 DEBUG nova.network.os_vif_util [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.444 255071 DEBUG nova.network.os_vif_util [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.445 255071 DEBUG nova.objects.instance [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'pci_devices' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.459 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <uuid>b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12</uuid>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <name>instance-0000000a</name>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:name>tempest-TestStampPattern-server-292663265</nova:name>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:06:45</nova:creationTime>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:user uuid="c4f53a86d1eb4bdebed4ec5dd9b5ff45">tempest-TestStampPattern-194782062-project-member</nova:user>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:project uuid="e34fda55585f453b8b66f12e625234fe">tempest-TestStampPattern-194782062</nova:project>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="66b56617-4575-4cdd-9816-e743304dffab"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <nova:port uuid="3adae585-03a6-434e-a645-7fb75855efe0">
Nov 29 08:06:46 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <system>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="serial">b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="uuid">b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </system>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <os>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </os>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <features>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </features>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk">
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </source>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config">
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </source>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:06:46 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:52:2b:db"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <target dev="tap3adae585-03"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/console.log" append="off"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <video>
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </video>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <input type="keyboard" bus="usb"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:06:46 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:06:46 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:06:46 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:06:46 compute-0 nova_compute[255040]: </domain>
Nov 29 08:06:46 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
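[annotation] The block above is the complete guest definition Nova handed to libvirt, with Nova's own bookkeeping (flavor, owner, root image, ports) embedded under its nova XML namespace. The same metadata can be recovered later from any running guest via `virsh dumpxml`. A minimal stdlib sketch; the snippet is abbreviated from the XML above, the namespace URI is an assumption (the real one is declared earlier in the full XML), and the `{*}` wildcard (Python 3.8+) sidesteps it anyway:

import xml.etree.ElementTree as ET

# Abbreviated from the domain XML logged above; in practice feed this the
# full output of `virsh dumpxml instance-0000000a`.
xml_text = """\
<metadata xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
  <nova:instance>
    <nova:flavor>
      <nova:memory>128</nova:memory>
      <nova:vcpus>1</nova:vcpus>
    </nova:flavor>
    <nova:owner>
      <nova:project uuid="e34fda55585f453b8b66f12e625234fe">tempest-TestStampPattern-194782062</nova:project>
    </nova:owner>
  </nova:instance>
</metadata>
"""

root = ET.fromstring(xml_text)
# '{*}' matches any namespace, so the exact nova namespace URI never has
# to be hardcoded in the parser.
flavor = root.find(".//{*}flavor")
project = root.find(".//{*}project")
print("memory:", flavor.find("{*}memory").text, "MiB,",
      "vcpus:", flavor.find("{*}vcpus").text)
print("project:", project.text, project.get("uuid"))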
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.460 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Preparing to wait for external event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.461 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.461 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.461 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.462 255071 DEBUG nova.virt.libvirt.vif [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-292663265',display_name='tempest-TestStampPattern-server-292663265',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-292663265',id=10,image_ref='66b56617-4575-4cdd-9816-e743304dffab',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-zkgjybqr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e13306d3-0b4c-4937-8b4b-83605575ce82',image_min_disk='1',image_min_ram='0',image_owner_id='e34fda55585f453b8b66f12e625234fe',image_owner_project_name='tempest-TestStampPattern-194782062',image_owner_user_name='tempest-TestStampPattern-194782062-project-member',image_user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',network_allocated='True',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:42Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.462 255071 DEBUG nova.network.os_vif_util [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.463 255071 DEBUG nova.network.os_vif_util [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.463 255071 DEBUG os_vif [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.464 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.464 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.465 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.469 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.470 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3adae585-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.470 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3adae585-03, col_values=(('external_ids', {'iface-id': '3adae585-03a6-434e-a645-7fb75855efe0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:2b:db', 'vm-uuid': 'b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-0 NetworkManager[49116]: <info>  [1764403606.4745] manager: (tap3adae585-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.476 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.485 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.486 255071 INFO os_vif [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03')
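[annotation] Read as a whole, the plug above is three OVSDB operations: ensure br-int exists (a no-op here, "Transaction caused no change"), add the tap port, and stamp the Interface row with the external_ids that let OVN locate the port (iface-id, attached-mac, vm-uuid). The result can be verified from the host afterwards; a sketch via subprocess, assuming ovs-vsctl is on PATH on compute-0:

import subprocess

def iface_external_ids(ifname: str) -> str:
    """Read back the external_ids that os-vif's DbSetCommand wrote."""
    return subprocess.check_output(
        ["ovs-vsctl", "get", "Interface", ifname, "external_ids"],
        text=True,
    ).strip()

# Expected to contain iface-id=3adae585-... and attached-mac=fa:16:3e:52:2b:db,
# matching the DbSetCommand logged above.
print(iface_external_ids("tap3adae585-03"))

The iface-id written here is exactly what ovn-controller matches against a Port_Binding a second later in this log.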
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.555 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.555 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.556 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No VIF found with MAC fa:16:3e:52:2b:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.556 255071 INFO nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Using config drive
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.580 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.671 255071 DEBUG nova.network.neutron [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updated VIF entry in instance network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.672 255071 DEBUG nova.network.neutron [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating instance_info_cache with network_info: [{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:46 compute-0 nova_compute[255040]: 2025-11-29 08:06:46.685 255071 DEBUG oslo_concurrency.lockutils [req-94436cfe-b194-46ca-90eb-22628c2fcaed req-2cd3a497-a3c5-4ae9-a195-4903f6835492 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
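[annotation] The network_info blob cached above is ordinary JSON once the log prefix is stripped, so the addressing details (devname, MAC, fixed IPs, MTU) can be pulled out mechanically. A sketch over a copy of the structure abbreviated to the fields it uses:

import json

# Abbreviated from the network_info list logged above.
network_info = json.loads("""
[{"id": "3adae585-03a6-434e-a645-7fb75855efe0",
  "address": "fa:16:3e:52:2b:db",
  "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.5", "type": "fixed"}]}],
              "meta": {"mtu": 1442}},
  "devname": "tap3adae585-03"}]
""")

for vif in network_info:
    net = vif["network"]
    ips = [ip["address"]
           for subnet in net["subnets"]
           for ip in subnet["ips"]]
    print(vif["devname"], vif["address"], ips, "mtu", net["meta"]["mtu"])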
Nov 29 08:06:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:06:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5804 writes, 26K keys, 5804 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5804 writes, 5804 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1731 writes, 8407 keys, 1731 commit groups, 1.0 writes per commit group, ingest: 10.65 MB, 0.02 MB/s
                                           Interval WAL: 1732 writes, 1732 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.5      1.43              0.14        15    0.095       0      0       0.0       0.0
                                             L6      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     53.5     43.8      2.50              0.41        14    0.179     69K   7905       0.0       0.0
                                            Sum      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     34.1     36.1      3.93              0.55        29    0.135     69K   7905       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6     35.8     36.3      1.63              0.21        12    0.136     33K   3662       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     53.5     43.8      2.50              0.41        14    0.179     69K   7905       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.5      1.42              0.14        14    0.102       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.031, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 3.9 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 304.00 MB usage: 13.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000175 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(869,12.83 MB,4.22113%) FilterBlock(30,201.55 KB,0.0647444%) IndexBlock(30,372.42 KB,0.119636%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:06:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 29 08:06:46 compute-0 ceph-mon[75237]: pgmap v1414: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 73 op/s
Nov 29 08:06:46 compute-0 ceph-mon[75237]: osdmap e251: 3 total, 3 up, 3 in
Nov 29 08:06:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/889634702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2657468481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 08:06:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.030 255071 INFO nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Creating config drive at /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.036 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x46akdo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.173 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8x46akdo" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.205 255071 DEBUG nova.storage.rbd_utils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] rbd image b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.209 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.379 255071 DEBUG oslo_concurrency.processutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.380 255071 INFO nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Deleting local config drive /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12/disk.config because it was imported into RBD.
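[annotation] These lines are the config drive's whole life on local disk: mkisofs builds an ISO9660/Joliet image labelled config-2 from a temporary staging directory, rbd import pushes it into the vms pool as <uuid>_disk.config (the image the SATA cdrom in the domain XML points at), and the local file is deleted. Note that oslo's processutils logs argv joined with spaces, so the multi-word -publisher value above was a single argument. A sketch of the equivalent calls; the /tmp staging path existed only for the duration of Nova's build:

import subprocess

instance = "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12"
iso = f"/var/lib/nova/instances/{instance}/disk.config"

# Equivalent of the mkisofs invocation logged above, with the publisher
# passed as one argv element.
subprocess.check_call([
    "/usr/bin/mkisofs", "-o", iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmp8x46akdo",
])

# Equivalent of the rbd import that followed.
subprocess.check_call([
    "rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])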
Nov 29 08:06:47 compute-0 kernel: tap3adae585-03: entered promiscuous mode
Nov 29 08:06:47 compute-0 NetworkManager[49116]: <info>  [1764403607.4281] manager: (tap3adae585-03): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 08:06:47 compute-0 ovn_controller[153295]: 2025-11-29T08:06:47Z|00100|binding|INFO|Claiming lport 3adae585-03a6-434e-a645-7fb75855efe0 for this chassis.
Nov 29 08:06:47 compute-0 ovn_controller[153295]: 2025-11-29T08:06:47Z|00101|binding|INFO|3adae585-03a6-434e-a645-7fb75855efe0: Claiming fa:16:3e:52:2b:db 10.100.0.5
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.430 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.437 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:2b:db 10.100.0.5'], port_security=['fa:16:3e:52:2b:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e34fda55585f453b8b66f12e625234fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '553abf0a-6893-4b91-98a5-f4750edd0687', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76f3566f-5b18-4f8e-8a2b-ee02876f83ee, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=3adae585-03a6-434e-a645-7fb75855efe0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.440 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 3adae585-03a6-434e-a645-7fb75855efe0 in datapath 40f35c3c-5e61-44c9-af5e-70c7d4a4426c bound to our chassis
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.441 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40f35c3c-5e61-44c9-af5e-70c7d4a4426c
Nov 29 08:06:47 compute-0 ovn_controller[153295]: 2025-11-29T08:06:47Z|00102|binding|INFO|Setting lport 3adae585-03a6-434e-a645-7fb75855efe0 ovn-installed in OVS
Nov 29 08:06:47 compute-0 ovn_controller[153295]: 2025-11-29T08:06:47Z|00103|binding|INFO|Setting lport 3adae585-03a6-434e-a645-7fb75855efe0 up in Southbound
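[annotation] This is the OVN half of the same plug: ovn-controller spots the iface-id written into OVS, claims the Port_Binding for this chassis, flags the interface ovn-installed, and sets the port up in the Southbound DB, which is what ultimately drives Neutron's network-vif-plugged callback below. The binding can be inspected directly; a sketch assuming ovn-sbctl on this node can reach the SB database:

import subprocess

port = "3adae585-03a6-434e-a645-7fb75855efe0"
# Shows chassis, mac and up state for the logical port claimed above.
print(subprocess.check_output(
    ["ovn-sbctl", "--columns=logical_port,chassis,mac,up",
     "find", "Port_Binding", f"logical_port={port}"],
    text=True,
))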
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.455 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.459 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.462 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bc77d075-ba60-4f74-b7a3-a29042688e98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 systemd-udevd[277501]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:47 compute-0 systemd-machined[216271]: New machine qemu-10-instance-0000000a.
Nov 29 08:06:47 compute-0 NetworkManager[49116]: <info>  [1764403607.4787] device (tap3adae585-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:47 compute-0 NetworkManager[49116]: <info>  [1764403607.4800] device (tap3adae585-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:47 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.497 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f66a5c-3c6a-4043-ba87-a349f958a0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.502 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[331458c9-ef2d-4d80-8fbc-37e3d5d14c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.534 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d41821be-c766-4a89-b3eb-37146c5e774f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.555 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[38ed21ca-2d35-464d-bbcc-5461f6ee7b21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40f35c3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:36:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579138, 'reachable_time': 35207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277512, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.578 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b7110ac2-ee9d-460d-bb76-9c950d1fe252]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40f35c3c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579152, 'tstamp': 579152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277515, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40f35c3c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579156, 'tstamp': 579156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277515, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.581 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40f35c3c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.583 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.586 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.586 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40f35c3c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.586 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.587 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40f35c3c-50, col_values=(('external_ids', {'iface-id': '7416de2d-6dc8-411d-a143-d9d9b0a4507f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:06:47.587 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
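[annotation] The privsep replies above show the metadata agent's provisioning result: inside the ovnmeta-40f35c3c... namespace, tap40f35c3c-51 carries both the subnet address 10.100.0.2/28 and the link-local metadata address 169.254.169.254/32, and the host side of the veth is re-homed onto br-int. A quick host-side check; a sketch assuming iproute2 and the namespace name taken from the log:

import subprocess

ns = "ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c"
# Should list 10.100.0.2/28 and 169.254.169.254/32 on tap40f35c3c-51,
# matching the RTM_NEWADDR events logged above.
print(subprocess.check_output(
    ["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
    text=True,
))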
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.621 255071 DEBUG nova.compute.manager [req-6a635462-7e23-448d-b680-49500de0bdf5 req-c6a3c9df-749d-43f0-91fd-582650aa4462 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.622 255071 DEBUG oslo_concurrency.lockutils [req-6a635462-7e23-448d-b680-49500de0bdf5 req-c6a3c9df-749d-43f0-91fd-582650aa4462 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.622 255071 DEBUG oslo_concurrency.lockutils [req-6a635462-7e23-448d-b680-49500de0bdf5 req-c6a3c9df-749d-43f0-91fd-582650aa4462 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.623 255071 DEBUG oslo_concurrency.lockutils [req-6a635462-7e23-448d-b680-49500de0bdf5 req-c6a3c9df-749d-43f0-91fd-582650aa4462 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.623 255071 DEBUG nova.compute.manager [req-6a635462-7e23-448d-b680-49500de0bdf5 req-c6a3c9df-749d-43f0-91fd-582650aa4462 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Processing event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
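[annotation] The prepare at 08:06:46.461 and the pop here bracket the spawn: Nova registers the network-vif-plugged event before plugging the VIF, does the libvirt work, and only then waits, so a callback that races ahead of the waiter (as it does here) is not lost. A simplified sketch of that register-then-wait pattern using threading; this illustrates the idea, not Nova's actual implementation:

import threading

class InstanceEvents:
    """Register-then-wait, as in prepare_for_instance_event / pop_instance_event."""
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}          # (instance, name) -> threading.Event

    def prepare(self, instance, name):
        with self._lock:           # mirrors the "<uuid>-events" lock above
            return self._events.setdefault((instance, name), threading.Event())

    def pop(self, instance, name):
        with self._lock:
            event = self._events.pop((instance, name), None)
        if event:                  # external event (network-vif-plugged) arrived
            event.set()
        return event

events = InstanceEvents()
waiter = events.prepare("b6e51e8c", "network-vif-plugged-3adae585")
# ... plug the VIF, define and start the guest ...
events.pop("b6e51e8c", "network-vif-plugged-3adae585")  # Neutron callback path
assert waiter.wait(timeout=300)  # "Instance event wait completed in 0 seconds"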
Nov 29 08:06:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Nov 29 08:06:47 compute-0 ceph-mon[75237]: osdmap e252: 3 total, 3 up, 3 in
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.900 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.901 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403607.899148, b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.901 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] VM Started (Lifecycle Event)
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.905 255071 DEBUG nova.virt.libvirt.driver [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.909 255071 INFO nova.virt.libvirt.driver [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Instance spawned successfully.
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.910 255071 INFO nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Took 5.31 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.910 255071 DEBUG nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.923 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.928 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1755651315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.967 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.968 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403607.899405, b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.968 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] VM Paused (Lifecycle Event)
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.989 255071 INFO nova.compute.manager [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Took 6.66 seconds to build instance.
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.991 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.996 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403607.9042053, b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-0 nova_compute[255040]: 2025-11-29 08:06:47.996 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] VM Resumed (Lifecycle Event)
Nov 29 08:06:48 compute-0 nova_compute[255040]: 2025-11-29 08:06:48.006 255071 DEBUG oslo_concurrency.lockutils [None req-5b06855b-d038-47c6-9d66-44e7a4d93a3e c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:48 compute-0 nova_compute[255040]: 2025-11-29 08:06:48.011 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:48 compute-0 nova_compute[255040]: 2025-11-29 08:06:48.017 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
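[annotation] The sync lines compare the database power_state (0 while building) with what libvirt reports (1 once the guest runs). The integers map to Nova's power_state constants; the mapping below is my assumption of nova.compute.power_state's values, since the log itself only confirms 0 and 1:

# Assumed mapping from nova.compute.power_state; 0 and 1 are the two
# values seen in the sync_power_state lines above.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATES[0], "->", POWER_STATES[1])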
Nov 29 08:06:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1213107286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1213107286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:48 compute-0 ceph-mon[75237]: pgmap v1417: 305 pgs: 305 active+clean; 248 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Nov 29 08:06:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1755651315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1213107286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1213107286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.649 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 284 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 6.0 MiB/s wr, 235 op/s
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.736 255071 DEBUG nova.compute.manager [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.736 255071 DEBUG oslo_concurrency.lockutils [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.737 255071 DEBUG oslo_concurrency.lockutils [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.737 255071 DEBUG oslo_concurrency.lockutils [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.737 255071 DEBUG nova.compute.manager [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] No waiting events found dispatching network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-0 nova_compute[255040]: 2025-11-29 08:06:49.738 255071 WARNING nova.compute.manager [req-e76960a2-084e-4da2-a989-103b3bfc14ed req-361f6f58-7488-48e2-8cb9-c040fad3b55e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received unexpected event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 for instance with vm_state active and task_state None.
Nov 29 08:06:50 compute-0 ceph-mon[75237]: pgmap v1418: 305 pgs: 305 active+clean; 284 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 482 KiB/s rd, 6.0 MiB/s wr, 235 op/s
Nov 29 08:06:50 compute-0 sudo[277559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:50 compute-0 sudo[277559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:50 compute-0 sudo[277559]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:50 compute-0 sudo[277590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:50 compute-0 sudo[277590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:50 compute-0 sudo[277590]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:50 compute-0 sudo[277629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:50 compute-0 sudo[277629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:50 compute-0 sudo[277629]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:50 compute-0 podman[277583]: 2025-11-29 08:06:50.632251908 +0000 UTC m=+0.172055648 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:50 compute-0 sudo[277660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:06:50 compute-0 sudo[277660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-0 sudo[277660]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:06:51 compute-0 nova_compute[255040]: 2025-11-29 08:06:51.474 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:06:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 616d22e1-d75c-4ae4-9ab4-7e56d03d4857 does not exist
Nov 29 08:06:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f9b8fcf1-1973-4bc5-afe9-38b458cfc5c6 does not exist
Nov 29 08:06:51 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9daeb0b4-45f6-4f90-9b66-ee01bd528d4a does not exist
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:06:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:06:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:51 compute-0 sudo[277716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 308 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 8.9 MiB/s wr, 225 op/s
Nov 29 08:06:51 compute-0 sudo[277716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-0 sudo[277716]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-0 sudo[277741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:51 compute-0 sudo[277741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-0 sudo[277741]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-0 sudo[277766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:51 compute-0 sudo[277766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:51 compute-0 sudo[277766]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:51 compute-0 sudo[277791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:06:51 compute-0 sudo[277791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.215973753 +0000 UTC m=+0.031089910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:52 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.422245634 +0000 UTC m=+0.237361751 container create 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:52 compute-0 systemd[1]: Started libpod-conmon-3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711.scope.
Nov 29 08:06:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:52 compute-0 nova_compute[255040]: 2025-11-29 08:06:52.542 255071 DEBUG nova.compute.manager [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-changed-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:52 compute-0 nova_compute[255040]: 2025-11-29 08:06:52.544 255071 DEBUG nova.compute.manager [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing instance network info cache due to event network-changed-3adae585-03a6-434e-a645-7fb75855efe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:52 compute-0 nova_compute[255040]: 2025-11-29 08:06:52.545 255071 DEBUG oslo_concurrency.lockutils [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:52 compute-0 nova_compute[255040]: 2025-11-29 08:06:52.545 255071 DEBUG oslo_concurrency.lockutils [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:52 compute-0 nova_compute[255040]: 2025-11-29 08:06:52.545 255071 DEBUG nova.network.neutron [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.581471116 +0000 UTC m=+0.396587263 container init 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.591500787 +0000 UTC m=+0.406616904 container start 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 08:06:52 compute-0 systemd[1]: libpod-3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711.scope: Deactivated successfully.
Nov 29 08:06:52 compute-0 musing_pasteur[277870]: 167 167
Nov 29 08:06:52 compute-0 conmon[277870]: conmon 3488451388e3caf2cf97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711.scope/container/memory.events
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.60423127 +0000 UTC m=+0.419347407 container attach 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.604950409 +0000 UTC m=+0.420066526 container died 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d72ca8004c52ec8fd18cd68fd47b2387db96dd741042f5eb75619fbc02d79b6-merged.mount: Deactivated successfully.
Nov 29 08:06:52 compute-0 podman[277854]: 2025-11-29 08:06:52.732907709 +0000 UTC m=+0.548023816 container remove 3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:06:52 compute-0 systemd[1]: libpod-conmon-3488451388e3caf2cf97c1eb1de73c57028954d070163a2c509021b4d2964711.scope: Deactivated successfully.
Nov 29 08:06:53 compute-0 podman[277893]: 2025-11-29 08:06:53.016555235 +0000 UTC m=+0.107551280 container create 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:06:53 compute-0 podman[277893]: 2025-11-29 08:06:52.941393199 +0000 UTC m=+0.032389264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:53 compute-0 systemd[1]: Started libpod-conmon-09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45.scope.
Nov 29 08:06:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:53 compute-0 podman[277893]: 2025-11-29 08:06:53.22372856 +0000 UTC m=+0.314724605 container init 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:06:53 compute-0 podman[277893]: 2025-11-29 08:06:53.230487182 +0000 UTC m=+0.321483237 container start 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 08:06:53 compute-0 podman[277893]: 2025-11-29 08:06:53.291947429 +0000 UTC m=+0.382943464 container attach 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 08:06:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:06:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:06:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:06:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:53 compute-0 ceph-mon[75237]: pgmap v1419: 305 pgs: 305 active+clean; 308 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 8.9 MiB/s wr, 225 op/s
Nov 29 08:06:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 448 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 25 MiB/s wr, 231 op/s
Nov 29 08:06:54 compute-0 nova_compute[255040]: 2025-11-29 08:06:54.008 255071 DEBUG nova.network.neutron [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updated VIF entry in instance network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:54 compute-0 nova_compute[255040]: 2025-11-29 08:06:54.009 255071 DEBUG nova.network.neutron [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating instance_info_cache with network_info: [{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:54 compute-0 nova_compute[255040]: 2025-11-29 08:06:54.037 255071 DEBUG oslo_concurrency.lockutils [req-623548e4-4dd1-44ba-a0bf-8b01d6e2bb0a req-d75519b6-f437-4e20-8f10-ed3b2dae114b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:54 compute-0 ceph-mon[75237]: pgmap v1420: 305 pgs: 305 active+clean; 448 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 25 MiB/s wr, 231 op/s
Nov 29 08:06:54 compute-0 charming_hoover[277909]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:06:54 compute-0 charming_hoover[277909]: --> relative data size: 1.0
Nov 29 08:06:54 compute-0 charming_hoover[277909]: --> All data devices are unavailable
Nov 29 08:06:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 29 08:06:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 08:06:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 29 08:06:54 compute-0 systemd[1]: libpod-09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45.scope: Deactivated successfully.
Nov 29 08:06:54 compute-0 systemd[1]: libpod-09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45.scope: Consumed 1.200s CPU time.
Nov 29 08:06:54 compute-0 podman[277893]: 2025-11-29 08:06:54.513473529 +0000 UTC m=+1.604469544 container died 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cce19173898e6264cd979aa7c8b73482fe27a3021ba2a5e22fafc881cc78fa9-merged.mount: Deactivated successfully.
Nov 29 08:06:54 compute-0 nova_compute[255040]: 2025-11-29 08:06:54.652 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:54 compute-0 podman[277893]: 2025-11-29 08:06:54.713516622 +0000 UTC m=+1.804512647 container remove 09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:54 compute-0 systemd[1]: libpod-conmon-09dc99f9b60636739c598094301cec43b2d5704569a8fa9e9482b84a874fbf45.scope: Deactivated successfully.
Nov 29 08:06:54 compute-0 sudo[277791]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:54 compute-0 sudo[277952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:54 compute-0 sudo[277952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:54 compute-0 sudo[277952]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:54 compute-0 sudo[277977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:54 compute-0 sudo[277977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:54 compute-0 sudo[277977]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:54 compute-0 sudo[278002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:54 compute-0 sudo[278002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:54 compute-0 sudo[278002]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:55 compute-0 sudo[278027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:06:55 compute-0 sudo[278027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.473823258 +0000 UTC m=+0.055294721 container create 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:55 compute-0 systemd[1]: Started libpod-conmon-77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1.scope.
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.448159086 +0000 UTC m=+0.029630609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:55 compute-0 ceph-mon[75237]: osdmap e253: 3 total, 3 up, 3 in
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.606175826 +0000 UTC m=+0.187647319 container init 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.619108975 +0000 UTC m=+0.200580428 container start 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.623723569 +0000 UTC m=+0.205195042 container attach 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:06:55 compute-0 serene_bassi[278107]: 167 167
Nov 29 08:06:55 compute-0 systemd[1]: libpod-77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1.scope: Deactivated successfully.
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.631979692 +0000 UTC m=+0.213451165 container died 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:06:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 692 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 51 MiB/s wr, 332 op/s
Nov 29 08:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-098c05511f0c2a8dd7beb081551933a97633cd974b90c987768487501de5b459-merged.mount: Deactivated successfully.
Nov 29 08:06:55 compute-0 podman[278091]: 2025-11-29 08:06:55.871126569 +0000 UTC m=+0.452598032 container remove 77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:06:55 compute-0 systemd[1]: libpod-conmon-77b4ef53d31cb17b029f0345b3fed33625bb1fd121f8d376fb673420d88cb8f1.scope: Deactivated successfully.
Nov 29 08:06:56 compute-0 podman[278130]: 2025-11-29 08:06:56.065008056 +0000 UTC m=+0.046861595 container create efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007630884938464544 of space, bias 1.0, pg target 0.22892654815393632 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003799544792277137 of space, bias 1.0, pg target 0.11398634376831411 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.007488632525070001 of space, bias 1.0, pg target 2.2465897575210003 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014239867202253044 of space, bias 1.0, pg target 0.4243480426271407 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006064009897766491 quantized to 16 (current 16)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.580012372208114e-05 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006443010516376897 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015160024744416227 quantized to 32 (current 32)
Nov 29 08:06:56 compute-0 systemd[1]: Started libpod-conmon-efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3.scope.
Nov 29 08:06:56 compute-0 podman[278130]: 2025-11-29 08:06:56.043065734 +0000 UTC m=+0.024919293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6acea1a9a4882e425a033b4c19000b54f2c69cf7662420127f39cf9727ea0b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6acea1a9a4882e425a033b4c19000b54f2c69cf7662420127f39cf9727ea0b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6acea1a9a4882e425a033b4c19000b54f2c69cf7662420127f39cf9727ea0b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6acea1a9a4882e425a033b4c19000b54f2c69cf7662420127f39cf9727ea0b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:56 compute-0 podman[278130]: 2025-11-29 08:06:56.171391664 +0000 UTC m=+0.153245223 container init efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:06:56 compute-0 podman[278130]: 2025-11-29 08:06:56.177465227 +0000 UTC m=+0.159318766 container start efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:06:56 compute-0 podman[278130]: 2025-11-29 08:06:56.181907657 +0000 UTC m=+0.163761216 container attach efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:06:56 compute-0 nova_compute[255040]: 2025-11-29 08:06:56.479 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:56 compute-0 ceph-mon[75237]: pgmap v1422: 305 pgs: 305 active+clean; 692 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 51 MiB/s wr, 332 op/s
Nov 29 08:06:57 compute-0 nova_compute[255040]: 2025-11-29 08:06:57.024 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:57 compute-0 dazzling_nash[278146]: {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     "0": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "devices": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "/dev/loop3"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             ],
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_name": "ceph_lv0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_size": "21470642176",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "name": "ceph_lv0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "tags": {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.crush_device_class": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.encrypted": "0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_id": "0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.vdo": "0"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             },
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "vg_name": "ceph_vg0"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         }
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     ],
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     "1": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "devices": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "/dev/loop4"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             ],
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_name": "ceph_lv1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_size": "21470642176",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "name": "ceph_lv1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "tags": {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.crush_device_class": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.encrypted": "0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_id": "1",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.vdo": "0"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             },
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "vg_name": "ceph_vg1"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         }
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     ],
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     "2": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "devices": [
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "/dev/loop5"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             ],
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_name": "ceph_lv2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_size": "21470642176",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "name": "ceph_lv2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "tags": {
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.crush_device_class": "",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.encrypted": "0",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osd_id": "2",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:                 "ceph.vdo": "0"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             },
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "type": "block",
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:             "vg_name": "ceph_vg2"
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:         }
Nov 29 08:06:57 compute-0 dazzling_nash[278146]:     ]
Nov 29 08:06:57 compute-0 dazzling_nash[278146]: }
Nov 29 08:06:57 compute-0 systemd[1]: libpod-efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3.scope: Deactivated successfully.
Nov 29 08:06:57 compute-0 podman[278130]: 2025-11-29 08:06:57.074257473 +0000 UTC m=+1.056111042 container died efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6acea1a9a4882e425a033b4c19000b54f2c69cf7662420127f39cf9727ea0b1-merged.mount: Deactivated successfully.
Nov 29 08:06:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 692 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 MiB/s wr, 291 op/s
Nov 29 08:06:58 compute-0 podman[278130]: 2025-11-29 08:06:58.002705082 +0000 UTC m=+1.984558621 container remove efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:06:58 compute-0 ceph-mon[75237]: pgmap v1423: 305 pgs: 305 active+clean; 692 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 MiB/s wr, 291 op/s
Nov 29 08:06:58 compute-0 systemd[1]: libpod-conmon-efc47e69066b1f199bb21b4a5a251eb4235af8cc36bbcfd6b48d527f24288ba3.scope: Deactivated successfully.
Nov 29 08:06:58 compute-0 sudo[278027]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:58 compute-0 sudo[278166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:58 compute-0 sudo[278166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:58 compute-0 sudo[278166]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:58 compute-0 sudo[278191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:58 compute-0 sudo[278191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:58 compute-0 sudo[278191]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:58 compute-0 sudo[278216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:58 compute-0 sudo[278216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:58 compute-0 sudo[278216]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:58 compute-0 sudo[278241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:06:58 compute-0 sudo[278241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.726789612 +0000 UTC m=+0.083138642 container create ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.668600253 +0000 UTC m=+0.024949303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639598531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:58 compute-0 systemd[1]: Started libpod-conmon-ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee.scope.
Nov 29 08:06:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639598531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.893209828 +0000 UTC m=+0.249558958 container init ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.911459601 +0000 UTC m=+0.267808641 container start ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:06:58 compute-0 gallant_wright[278319]: 167 167
Nov 29 08:06:58 compute-0 systemd[1]: libpod-ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee.scope: Deactivated successfully.
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.94186343 +0000 UTC m=+0.298212480 container attach ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:06:58 compute-0 podman[278303]: 2025-11-29 08:06:58.942876267 +0000 UTC m=+0.299225327 container died ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:06:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2639598531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2639598531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9963fd9ece6ef01385e79c3dc2343e3acab63f645683524864979b1fe3868e8-merged.mount: Deactivated successfully.
Nov 29 08:06:59 compute-0 podman[278303]: 2025-11-29 08:06:59.219923956 +0000 UTC m=+0.576272986 container remove ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:06:59 compute-0 systemd[1]: libpod-conmon-ca742f54425fe13ef19b7f741299aea0bde9b0311b46eadd522452067d078bee.scope: Deactivated successfully.
Nov 29 08:06:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:59 compute-0 podman[278344]: 2025-11-29 08:06:59.498083385 +0000 UTC m=+0.088224030 container create 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:59 compute-0 podman[278344]: 2025-11-29 08:06:59.455198459 +0000 UTC m=+0.045339134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:59 compute-0 systemd[1]: Started libpod-conmon-7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362.scope.
Nov 29 08:06:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5949fb5976ea7fc239cf60aa92c8e41a7e266debeab507b508f9be8ba9a2775a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5949fb5976ea7fc239cf60aa92c8e41a7e266debeab507b508f9be8ba9a2775a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5949fb5976ea7fc239cf60aa92c8e41a7e266debeab507b508f9be8ba9a2775a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5949fb5976ea7fc239cf60aa92c8e41a7e266debeab507b508f9be8ba9a2775a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:59 compute-0 podman[278344]: 2025-11-29 08:06:59.606378404 +0000 UTC m=+0.196519079 container init 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 08:06:59 compute-0 podman[278344]: 2025-11-29 08:06:59.613937008 +0000 UTC m=+0.204077663 container start 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:59 compute-0 podman[278344]: 2025-11-29 08:06:59.617465583 +0000 UTC m=+0.207606258 container attach 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 08:06:59 compute-0 nova_compute[255040]: 2025-11-29 08:06:59.655 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 992 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 71 MiB/s wr, 245 op/s
Nov 29 08:07:00 compute-0 ceph-mon[75237]: pgmap v1424: 305 pgs: 305 active+clean; 992 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 71 MiB/s wr, 245 op/s
Nov 29 08:07:00 compute-0 magical_rosalind[278361]: {
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_id": 2,
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "type": "bluestore"
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     },
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_id": 0,
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "type": "bluestore"
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     },
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_id": 1,
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:         "type": "bluestore"
Nov 29 08:07:00 compute-0 magical_rosalind[278361]:     }
Nov 29 08:07:00 compute-0 magical_rosalind[278361]: }
Nov 29 08:07:00 compute-0 systemd[1]: libpod-7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362.scope: Deactivated successfully.
Nov 29 08:07:00 compute-0 podman[278344]: 2025-11-29 08:07:00.828932062 +0000 UTC m=+1.419072747 container died 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 08:07:00 compute-0 systemd[1]: libpod-7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362.scope: Consumed 1.150s CPU time.
Nov 29 08:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5949fb5976ea7fc239cf60aa92c8e41a7e266debeab507b508f9be8ba9a2775a-merged.mount: Deactivated successfully.
Nov 29 08:07:00 compute-0 podman[278344]: 2025-11-29 08:07:00.940731526 +0000 UTC m=+1.530872181 container remove 7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rosalind, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:07:00 compute-0 systemd[1]: libpod-conmon-7ad0a716e110e65d07033957ebdcd4a81d601f587356ade3b8c718f820f4f362.scope: Deactivated successfully.
Nov 29 08:07:00 compute-0 sudo[278241]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:07:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:07:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:07:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:07:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f4e2a9a2-bb79-4f23-aa76-1e566aad21ed does not exist
Nov 29 08:07:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e9fb1856-713a-42f9-90b2-abcb9fd26d8a does not exist
Nov 29 08:07:01 compute-0 podman[278395]: 2025-11-29 08:07:01.038573614 +0000 UTC m=+0.200969880 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:07:01 compute-0 sudo[278423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:01 compute-0 sudo[278423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:01 compute-0 sudo[278423]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:01 compute-0 sudo[278448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:07:01 compute-0 sudo[278448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:01 compute-0 sudo[278448]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:01 compute-0 nova_compute[255040]: 2025-11-29 08:07:01.472 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:01 compute-0 nova_compute[255040]: 2025-11-29 08:07:01.484 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 83 MiB/s wr, 228 op/s
Nov 29 08:07:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:07:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:07:01 compute-0 ceph-mon[75237]: pgmap v1425: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 83 MiB/s wr, 228 op/s
Nov 29 08:07:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 853 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 78 MiB/s wr, 248 op/s
Nov 29 08:07:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:04 compute-0 nova_compute[255040]: 2025-11-29 08:07:04.658 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 29 08:07:04 compute-0 ceph-mon[75237]: pgmap v1426: 305 pgs: 305 active+clean; 853 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 78 MiB/s wr, 248 op/s
Nov 29 08:07:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 08:07:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 29 08:07:05 compute-0 ovn_controller[153295]: 2025-11-29T08:07:05Z|00014|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.5
Nov 29 08:07:05 compute-0 ovn_controller[153295]: 2025-11-29T08:07:05Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:52:2b:db 10.100.0.5
Nov 29 08:07:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 262 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 59 MiB/s wr, 242 op/s
Nov 29 08:07:05 compute-0 ceph-mon[75237]: osdmap e254: 3 total, 3 up, 3 in
Nov 29 08:07:06 compute-0 nova_compute[255040]: 2025-11-29 08:07:06.488 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:06 compute-0 nova_compute[255040]: 2025-11-29 08:07:06.553 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:06.553 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:06.555 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:07:06 compute-0 ceph-mon[75237]: pgmap v1428: 305 pgs: 305 active+clean; 262 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 59 MiB/s wr, 242 op/s
Nov 29 08:07:06 compute-0 podman[278473]: 2025-11-29 08:07:06.905143925 +0000 UTC m=+0.069163006 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:07:07 compute-0 nova_compute[255040]: 2025-11-29 08:07:07.093 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 262 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 59 MiB/s wr, 242 op/s
Nov 29 08:07:08 compute-0 ceph-mon[75237]: pgmap v1429: 305 pgs: 305 active+clean; 262 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 59 MiB/s wr, 242 op/s
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:09 compute-0 ovn_controller[153295]: 2025-11-29T08:07:09Z|00016|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.5
Nov 29 08:07:09 compute-0 ovn_controller[153295]: 2025-11-29T08:07:09Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:52:2b:db 10.100.0.5
Nov 29 08:07:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 262 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 29 MiB/s wr, 179 op/s
Nov 29 08:07:09 compute-0 nova_compute[255040]: 2025-11-29 08:07:09.660 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:10 compute-0 ovn_controller[153295]: 2025-11-29T08:07:10Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:2b:db 10.100.0.5
Nov 29 08:07:10 compute-0 ovn_controller[153295]: 2025-11-29T08:07:10Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:2b:db 10.100.0.5
Nov 29 08:07:10 compute-0 nova_compute[255040]: 2025-11-29 08:07:10.887 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:11 compute-0 nova_compute[255040]: 2025-11-29 08:07:11.491 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 262 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 14 MiB/s wr, 177 op/s
Nov 29 08:07:11 compute-0 ceph-mon[75237]: pgmap v1430: 305 pgs: 305 active+clean; 262 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 29 MiB/s wr, 179 op/s
Nov 29 08:07:12 compute-0 ceph-mon[75237]: pgmap v1431: 305 pgs: 305 active+clean; 262 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 14 MiB/s wr, 177 op/s
Nov 29 08:07:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 266 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 127 op/s
Nov 29 08:07:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 29 08:07:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 08:07:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 29 08:07:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:14.557 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:14 compute-0 nova_compute[255040]: 2025-11-29 08:07:14.662 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:14 compute-0 ceph-mon[75237]: pgmap v1432: 305 pgs: 305 active+clean; 266 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 127 op/s
Nov 29 08:07:14 compute-0 ceph-mon[75237]: osdmap e255: 3 total, 3 up, 3 in
Nov 29 08:07:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 267 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 64 KiB/s wr, 45 op/s
Nov 29 08:07:16 compute-0 nova_compute[255040]: 2025-11-29 08:07:16.494 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 ceph-mon[75237]: pgmap v1434: 305 pgs: 305 active+clean; 267 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 64 KiB/s wr, 45 op/s
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.405 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.406 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.422 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.511 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.512 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.527 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.529 255071 INFO nova.compute.claims [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:07:17 compute-0 nova_compute[255040]: 2025-11-29 08:07:17.654 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 267 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 64 KiB/s wr, 45 op/s
Nov 29 08:07:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469172679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.189 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.198 255071 DEBUG nova.compute.provider_tree [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.218 255071 DEBUG nova.scheduler.client.report [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.242 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.244 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.307 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.309 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.330 255071 INFO nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.351 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.450 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.451 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.452 255071 INFO nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Creating image(s)
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.478 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:18 compute-0 ceph-mon[75237]: pgmap v1435: 305 pgs: 305 active+clean; 267 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 64 KiB/s wr, 45 op/s
Nov 29 08:07:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/469172679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.507 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.758 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.763 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.795 255071 DEBUG nova.policy [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3299eefacb3a43a898b339895ff0f205', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eeda2edc1f464a5480a29e4ff783c9b7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.847 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.848 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.848 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.849 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.874 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:18 compute-0 nova_compute[255040]: 2025-11-29 08:07:18.879 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fbf20945-7898-4904-95c5-0047536f3eab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 300 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 43 op/s
Nov 29 08:07:19 compute-0 nova_compute[255040]: 2025-11-29 08:07:19.665 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:19 compute-0 nova_compute[255040]: 2025-11-29 08:07:19.750 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Successfully created port: 2eff6be1-3572-4ee8-b40e-208a0051b03c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.464 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Successfully updated port: 2eff6be1-3572-4ee8-b40e-208a0051b03c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.482 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.483 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquired lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.483 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.568 255071 DEBUG nova.compute.manager [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-changed-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.569 255071 DEBUG nova.compute.manager [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Refreshing instance network info cache due to event network-changed-2eff6be1-3572-4ee8-b40e-208a0051b03c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.569 255071 DEBUG oslo_concurrency.lockutils [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:20 compute-0 ceph-mon[75237]: pgmap v1436: 305 pgs: 305 active+clean; 300 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 43 op/s
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.680 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.945 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.946 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:20 compute-0 nova_compute[255040]: 2025-11-29 08:07:20.962 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:07:20 compute-0 podman[278609]: 2025-11-29 08:07:20.971289601 +0000 UTC m=+0.133637564 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.029 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.029 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.040 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.041 255071 INFO nova.compute.claims [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.173 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.315 255071 DEBUG nova.network.neutron [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updating instance_info_cache with network_info: [{"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.339 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Releasing lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.339 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Instance network_info: |[{"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
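The network_info blob logged twice above (once for the cache update, once for the instance) is plain JSON; when tracing a build it is usually enough to pull the port ID, MAC, fixed IPs and MTU out of it. A sketch, where nw_info_str stands for the |[...]| payload copied from the log:

    # Sketch: extract the useful fields from the logged network_info JSON.
    import json

    network_info = json.loads(nw_info_str)  # nw_info_str: the |[...]| blob
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips,
              vif["network"]["meta"]["mtu"])
    # -> 2eff6be1-... fa:16:3e:59:76:07 ['10.100.0.4'] 1442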
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.340 255071 DEBUG oslo_concurrency.lockutils [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.340 255071 DEBUG nova.network.neutron [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Refreshing network info cache for port 2eff6be1-3572-4ee8-b40e-208a0051b03c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.496 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986840011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.638 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.645 255071 DEBUG nova.compute.provider_tree [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.662 255071 DEBUG nova.scheduler.client.report [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
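The inventory dump above is enough to reproduce the capacity Placement schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked through for this host:

    # Worked example using the inventory values from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2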
Nov 29 08:07:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 312 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.689 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.690 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.742 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.743 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.769 255071 INFO nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.794 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.903 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.904 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.905 255071 INFO nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Creating image(s)
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.927 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.954 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.979 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:21 compute-0 nova_compute[255040]: 2025-11-29 08:07:21.984 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.014 255071 DEBUG nova.policy [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '090eb6259968476885903b5734f6f67a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.053 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.054 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.055 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.055 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.077 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.081 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3986840011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.577 255071 DEBUG nova.network.neutron [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updated VIF entry in instance network info cache for port 2eff6be1-3572-4ee8-b40e-208a0051b03c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.578 255071 DEBUG nova.network.neutron [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updating instance_info_cache with network_info: [{"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.592 255071 DEBUG oslo_concurrency.lockutils [req-587a3d6b-6da3-4cca-a79f-a4bb7b2b3603 req-7ec6606c-7596-41c2-b7b3-572c3d85a9c9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.658 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Successfully created port: a591a89f-fb00-4493-90c0-a41a373c5a5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.903 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fbf20945-7898-4904-95c5-0047536f3eab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:22 compute-0 nova_compute[255040]: 2025-11-29 08:07:22.981 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] resizing rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
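The resize target above is simply the flavor root disk converted to bytes: the imported cirros image is smaller than the flavor's 1 GiB root disk (root_gb=1 on m1.nano, visible in the Flavor(...) dump further down), so the RBD image is grown to match:

    # Worked example: flavor root_gb -> resize target in bytes.
    root_gb = 1
    print(root_gb * 1024 ** 3)  # 1073741824, as in the log line above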
Nov 29 08:07:23 compute-0 ceph-mon[75237]: pgmap v1437: 305 pgs: 305 active+clean; 312 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.507 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Successfully updated port: a591a89f-fb00-4493-90c0-a41a373c5a5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.511 255071 DEBUG nova.compute.manager [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-changed-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.511 255071 DEBUG nova.compute.manager [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Refreshing instance network info cache due to event network-changed-a591a89f-fb00-4493-90c0-a41a373c5a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.512 255071 DEBUG oslo_concurrency.lockutils [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.512 255071 DEBUG oslo_concurrency.lockutils [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.512 255071 DEBUG nova.network.neutron [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Refreshing network info cache for port a591a89f-fb00-4493-90c0-a41a373c5a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.543 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 338 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.955 255071 DEBUG nova.network.neutron [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:23 compute-0 nova_compute[255040]: 2025-11-29 08:07:23.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.284 255071 DEBUG nova.network.neutron [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.304 255071 DEBUG oslo_concurrency.lockutils [req-94aa2b1d-06ea-4d1b-9f2a-deeb9cffa816 req-983c0aac-682a-47ed-be0d-862b9bb07dd7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.305 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquired lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.305 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.393 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.458 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.501 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] resizing rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:07:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:24 compute-0 ceph-mon[75237]: pgmap v1438: 305 pgs: 305 active+clean; 338 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.650 255071 DEBUG nova.objects.instance [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'migration_context' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.666 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.670 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.671 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Ensure instance console log exists: /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.671 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.672 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.672 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.674 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Start _get_guest_xml network_info=[{"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.679 255071 WARNING nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.686 255071 DEBUG nova.virt.libvirt.host [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.686 255071 DEBUG nova.virt.libvirt.host [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.690 255071 DEBUG nova.virt.libvirt.host [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.690 255071 DEBUG nova.virt.libvirt.host [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
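The two probes above are how the libvirt driver decides where CPU limits can be enforced on this host: the cgroups v1 hierarchy has no cpu controller, but the unified (v2) hierarchy advertises one. One common way to perform the v2 probe, assuming the conventional mount point (a sketch, not nova's exact code):

    # Sketch: check the unified cgroup hierarchy for a "cpu" controller.
    from pathlib import Path

    def has_v2_cpu_controller():
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_v2_cpu_controller())  # True on this host, per the log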
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.691 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.691 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.692 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.692 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.692 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.693 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.693 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.693 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.693 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.694 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.694 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.694 255071 DEBUG nova.virt.hardware [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
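The topology run above collapses to a small search: with no flavor or image constraints the limits default to 65536 each, and for vcpus=1 the only sockets:cores:threads factorization is 1:1:1, which is why exactly one topology is found and chosen. A simplified sketch of that enumeration (nova's real implementation in nova.virt.hardware also handles NUMA and preference ordering):

    # Simplified sketch: enumerate sockets*cores*threads factorizations.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> one topology found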
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.698 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.927 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.928 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.945 255071 DEBUG nova.objects.instance [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:24 compute-0 nova_compute[255040]: 2025-11-29 08:07:24.989 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263710154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.167 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
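The mon dump round-trip above is how the RBD backend learns the monitor addresses that end up as <host> entries in the guest's libvirt disk XML. A sketch of parsing that output; field names follow Ceph's JSON format, and the printed address is illustrative:

    # Sketch: derive monitor host:port pairs from "ceph mon dump" JSON.
    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute('ceph', 'mon', 'dump', '--format=json',
                                  '--id', 'openstack',
                                  '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)
    # addr looks like "192.168.122.100:6789/0"; strip the /nonce suffix.
    hosts = [m['addr'].rsplit('/', 1)[0] for m in mons['mons']]
    print(hosts)  # e.g. ['192.168.122.100:6789'] (illustrative)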
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.194 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.199 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.271 255071 DEBUG nova.objects.instance [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'migration_context' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.280 255071 DEBUG nova.network.neutron [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updating instance_info_cache with network_info: [{"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.285 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.286 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.286 255071 INFO nova.compute.manager [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Attaching volume cca9f80d-43e0-486a-935a-1377b2493429 to /dev/vdb
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.291 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.292 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Ensure instance console log exists: /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.292 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.292 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.293 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.302 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Releasing lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.303 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance network_info: |[{"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.306 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Start _get_guest_xml network_info=[{"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.315 255071 WARNING nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.320 255071 DEBUG nova.virt.libvirt.host [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.321 255071 DEBUG nova.virt.libvirt.host [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.324 255071 DEBUG nova.virt.libvirt.host [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.324 255071 DEBUG nova.virt.libvirt.host [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.324 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.325 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.325 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.325 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.326 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.326 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.326 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.326 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.326 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.327 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.327 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.327 255071 DEBUG nova.virt.hardware [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.330 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.444 255071 DEBUG os_brick.utils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.447 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.494 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.494 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[3308b17f-a510-4e95-a79b-b423cbc486cc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.495 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.506 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.506 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[745d4c32-f188-4b02-abc8-619ad8fa3ee4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.508 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.519 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.520 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[37b02b10-e871-4c27-b2f6-67d2a834dc69]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.521 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[147387b9-9da1-44cc-a159-4570a2aee3ee]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.521 255071 DEBUG oslo_concurrency.processutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.548 255071 DEBUG oslo_concurrency.processutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.551 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.552 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.552 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.552 255071 DEBUG os_brick.utils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.552 255071 DEBUG nova.virt.block_device [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating existing volume attachment record: 5639050c-fdd2-4bac-9706-e9f7b808d3b7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344952698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.657 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.659 255071 DEBUG nova.virt.libvirt.vif [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-131074981',display_name='tempest-TestEncryptedCinderVolumes-server-131074981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-131074981',id=11,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMK9VaGmZ36kv/Mjcz8YDZVjW+jRY/a3Wky6io428BhUkvZBLvNhzuy/PTBG8aoHewyEfjcuHMib0cX1Pr+f0ccuUMkyY0get8uS8l8RvbqyeDD6q9/pKiBO0vGwSsUJ3Q==',key_name='tempest-keypair-1866085076',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eeda2edc1f464a5480a29e4ff783c9b7',ramdisk_id='',reservation_id='r-tiswropc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2112377596',owner_user_name='tempest-TestEncryptedCinderVolumes-2112377596-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299eefacb3a43a898b339895ff0f205',uuid=fbf20945-7898-4904-95c5-0047536f3eab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.659 255071 DEBUG nova.network.os_vif_util [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converting VIF {"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.660 255071 DEBUG nova.network.os_vif_util [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.661 255071 DEBUG nova.objects.instance [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 387 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 5.0 MiB/s wr, 78 op/s
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.676 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <uuid>fbf20945-7898-4904-95c5-0047536f3eab</uuid>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <name>instance-0000000b</name>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-131074981</nova:name>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:07:24</nova:creationTime>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:user uuid="3299eefacb3a43a898b339895ff0f205">tempest-TestEncryptedCinderVolumes-2112377596-project-member</nova:user>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:project uuid="eeda2edc1f464a5480a29e4ff783c9b7">tempest-TestEncryptedCinderVolumes-2112377596</nova:project>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <nova:port uuid="2eff6be1-3572-4ee8-b40e-208a0051b03c">
Nov 29 08:07:25 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <system>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="serial">fbf20945-7898-4904-95c5-0047536f3eab</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="uuid">fbf20945-7898-4904-95c5-0047536f3eab</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </system>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <os>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </os>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <features>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </features>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fbf20945-7898-4904-95c5-0047536f3eab_disk">
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </source>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fbf20945-7898-4904-95c5-0047536f3eab_disk.config">
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </source>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:07:25 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:59:76:07"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <target dev="tap2eff6be1-35"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/console.log" append="off"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <video>
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </video>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:07:25 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:07:25 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:07:25 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:07:25 compute-0 nova_compute[255040]: </domain>
Nov 29 08:07:25 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.677 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Preparing to wait for external event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.677 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.678 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.678 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.679 255071 DEBUG nova.virt.libvirt.vif [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-131074981',display_name='tempest-TestEncryptedCinderVolumes-server-131074981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-131074981',id=11,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMK9VaGmZ36kv/Mjcz8YDZVjW+jRY/a3Wky6io428BhUkvZBLvNhzuy/PTBG8aoHewyEfjcuHMib0cX1Pr+f0ccuUMkyY0get8uS8l8RvbqyeDD6q9/pKiBO0vGwSsUJ3Q==',key_name='tempest-keypair-1866085076',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eeda2edc1f464a5480a29e4ff783c9b7',ramdisk_id='',reservation_id='r-tiswropc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2112377596',owner_user_name='tempest-TestEncryptedCinderVolumes-2112377596-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299eefacb3a43a898b339895ff0f205',uuid=fbf20945-7898-4904-95c5-0047536f3eab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.680 255071 DEBUG nova.network.os_vif_util [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converting VIF {"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.681 255071 DEBUG nova.network.os_vif_util [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.681 255071 DEBUG os_vif [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.682 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.683 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.683 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.688 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.689 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eff6be1-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.689 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eff6be1-35, col_values=(('external_ids', {'iface-id': '2eff6be1-3572-4ee8-b40e-208a0051b03c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:76:07', 'vm-uuid': 'fbf20945-7898-4904-95c5-0047536f3eab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:25 compute-0 NetworkManager[49116]: <info>  [1764403645.6933] manager: (tap2eff6be1-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.696 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.702 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.703 255071 INFO os_vif [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35')
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.766 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.767 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.767 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No VIF found with MAC fa:16:3e:59:76:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496161263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.767 255071 INFO nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Using config drive
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.794 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.802 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.827 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:25 compute-0 nova_compute[255040]: 2025-11-29 08:07:25.831 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3263710154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1344952698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2496161263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1751881116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95140947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.320 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.322 255071 DEBUG nova.virt.libvirt.vif [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1434087890',display_name='tempest-VolumesSnapshotTestJSON-instance-1434087890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1434087890',id=12,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL96ORfIik8BeRuckrnafOvlHRyjxUIWdH+at4r5POMR9+CtrL7JGCU8Mqd/H9E2HLcsOiM81E+IsSwOIPT+TVHX1Ez55V+XTvrnXrO+rTNDtC6s8MnjzygSU9av5U5xCQ==',key_name='tempest-keypair-748936652',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-368bhosc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=fdfa056f-5aa2-4ec1-b558-19291f104ebd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.322 255071 DEBUG nova.network.os_vif_util [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.323 255071 DEBUG nova.network.os_vif_util [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.324 255071 DEBUG nova.objects.instance [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'pci_devices' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.342 255071 DEBUG nova.objects.instance [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.363 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <uuid>fdfa056f-5aa2-4ec1-b558-19291f104ebd</uuid>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <name>instance-0000000c</name>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1434087890</nova:name>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:07:25</nova:creationTime>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:user uuid="090eb6259968476885903b5734f6f67a">tempest-VolumesSnapshotTestJSON-248670584-project-member</nova:user>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:project uuid="87f822d62c8f4ac6bed1a893f2b9e73f">tempest-VolumesSnapshotTestJSON-248670584</nova:project>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <nova:port uuid="a591a89f-fb00-4493-90c0-a41a373c5a5d">
Nov 29 08:07:26 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <system>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="serial">fdfa056f-5aa2-4ec1-b558-19291f104ebd</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="uuid">fdfa056f-5aa2-4ec1-b558-19291f104ebd</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </system>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <os>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </os>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <features>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </features>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk">
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </source>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config">
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </source>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:07:26 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:1d:e1:53"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <target dev="tapa591a89f-fb"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/console.log" append="off"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <video>
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </video>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:07:26 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:07:26 compute-0 nova_compute[255040]: </domain>
Nov 29 08:07:26 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.364 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Preparing to wait for external event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.365 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.365 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.365 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.366 255071 DEBUG nova.virt.libvirt.vif [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1434087890',display_name='tempest-VolumesSnapshotTestJSON-instance-1434087890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1434087890',id=12,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL96ORfIik8BeRuckrnafOvlHRyjxUIWdH+at4r5POMR9+CtrL7JGCU8Mqd/H9E2HLcsOiM81E+IsSwOIPT+TVHX1Ez55V+XTvrnXrO+rTNDtC6s8MnjzygSU9av5U5xCQ==',key_name='tempest-keypair-748936652',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-368bhosc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=fdfa056f-5aa2-4ec1-b558-19291f104ebd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.366 255071 DEBUG nova.network.os_vif_util [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.367 255071 DEBUG nova.network.os_vif_util [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.367 255071 DEBUG os_vif [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.368 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.368 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.369 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.374 255071 DEBUG nova.virt.libvirt.driver [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Attempting to attach volume cca9f80d-43e0-486a-935a-1377b2493429 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.375 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa591a89f-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.376 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa591a89f-fb, col_values=(('external_ids', {'iface-id': 'a591a89f-fb00-4493-90c0-a41a373c5a5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:e1:53', 'vm-uuid': 'fdfa056f-5aa2-4ec1-b558-19291f104ebd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.377 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:26 compute-0 NetworkManager[49116]: <info>  [1764403646.3794] manager: (tapa591a89f-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.381 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.382 255071 DEBUG nova.virt.libvirt.guest [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cca9f80d-43e0-486a-935a-1377b2493429">
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </source>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:07:26 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:26 compute-0 nova_compute[255040]:   <serial>cca9f80d-43e0-486a-935a-1377b2493429</serial>
Nov 29 08:07:26 compute-0 nova_compute[255040]: </disk>
Nov 29 08:07:26 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.389 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.392 255071 INFO os_vif [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb')
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.466 255071 INFO nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Creating config drive at /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.476 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrk0t_6r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.611 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrk0t_6r" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.645 255071 DEBUG nova.storage.rbd_utils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] rbd image fbf20945-7898-4904-95c5-0047536f3eab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.651 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config fbf20945-7898-4904-95c5-0047536f3eab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.723 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.724 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.724 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No VIF found with MAC fa:16:3e:1d:e1:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.725 255071 INFO nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Using config drive
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.753 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.792 255071 DEBUG nova.virt.libvirt.driver [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.793 255071 DEBUG nova.virt.libvirt.driver [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.793 255071 DEBUG nova.virt.libvirt.driver [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.793 255071 DEBUG nova.virt.libvirt.driver [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] No VIF found with MAC fa:16:3e:52:2b:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:26 compute-0 nova_compute[255040]: 2025-11-29 08:07:26.985 255071 DEBUG oslo_concurrency.lockutils [None req-b9feecda-7c9b-4566-96e2-63b47a01754d c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.049 255071 INFO nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Creating config drive at /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.058 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmcagzf4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:27 compute-0 ceph-mon[75237]: pgmap v1439: 305 pgs: 305 active+clean; 387 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 5.0 MiB/s wr, 78 op/s
Nov 29 08:07:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1751881116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/95140947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:27.129 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:27.130 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:27.131 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.198 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmcagzf4i" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.227 255071 DEBUG nova.storage.rbd_utils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.233 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 387 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 4.7 MiB/s wr, 69 op/s
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:27 compute-0 nova_compute[255040]: 2025-11-29 08:07:27.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:28 compute-0 ceph-mon[75237]: pgmap v1440: 305 pgs: 305 active+clean; 387 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 4.7 MiB/s wr, 69 op/s
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.484 255071 DEBUG oslo_concurrency.processutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config fbf20945-7898-4904-95c5-0047536f3eab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.485 255071 INFO nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Deleting local config drive /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab/disk.config because it was imported into RBD.
Nov 29 08:07:28 compute-0 kernel: tap2eff6be1-35: entered promiscuous mode
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.5290] manager: (tap2eff6be1-35): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.540 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00104|binding|INFO|Claiming lport 2eff6be1-3572-4ee8-b40e-208a0051b03c for this chassis.
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00105|binding|INFO|2eff6be1-3572-4ee8-b40e-208a0051b03c: Claiming fa:16:3e:59:76:07 10.100.0.4
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.548 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:76:07 10.100.0.4'], port_security=['fa:16:3e:59:76:07 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbf20945-7898-4904-95c5-0047536f3eab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b360768-ee11-45df-a7b1-30c167686953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eeda2edc1f464a5480a29e4ff783c9b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e941dca4-f8e2-4c1a-8bd1-11bea4d6ba77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59d06473-cfb2-4bab-8c48-e5a28c2465ff, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2eff6be1-3572-4ee8-b40e-208a0051b03c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.549 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2eff6be1-3572-4ee8-b40e-208a0051b03c in datapath 2b360768-ee11-45df-a7b1-30c167686953 bound to our chassis
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.551 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b360768-ee11-45df-a7b1-30c167686953
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00106|binding|INFO|Setting lport 2eff6be1-3572-4ee8-b40e-208a0051b03c ovn-installed in OVS
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00107|binding|INFO|Setting lport 2eff6be1-3572-4ee8-b40e-208a0051b03c up in Southbound
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.569 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.571 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 systemd-machined[216271]: New machine qemu-11-instance-0000000b.
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.571 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[50750248-fb6f-4239-bb9a-e462fb956116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.573 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b360768-e1 in ovnmeta-2b360768-ee11-45df-a7b1-30c167686953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:07:28 compute-0 systemd-udevd[279184]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.576 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b360768-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.576 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[75a55352-5149-4097-b26b-b9e07c40783a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.578 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8489472d-b379-4b5f-9b78-60c63d4fe43e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.5917] device (tap2eff6be1-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:28 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.5942] device (tap2eff6be1-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.599 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[1664821f-e0ec-4cdf-a5b8-d81eaa72eb38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.618 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0a373ae1-432b-4183-ab73-3b1bcee496f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.658 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[43404a09-6210-4058-b7d2-63e3698188b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.666 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[116f2e94-125a-4f6b-87dc-bd283e5f3ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.6678] manager: (tap2b360768-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.684 255071 DEBUG oslo_concurrency.processutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config fdfa056f-5aa2-4ec1-b558-19291f104ebd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.684 255071 INFO nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Deleting local config drive /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd/disk.config because it was imported into RBD.
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.710 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[457ce75e-b6dc-4714-b18c-9e9804c97947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.714 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d544b918-3ec8-4069-a884-b96b6fcf3f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 kernel: tapa591a89f-fb: entered promiscuous mode
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.7464] manager: (tapa591a89f-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 08:07:28 compute-0 systemd-udevd[279205]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.7481] device (tap2b360768-e0): carrier: link connected
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00108|binding|INFO|Claiming lport a591a89f-fb00-4493-90c0-a41a373c5a5d for this chassis.
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00109|binding|INFO|a591a89f-fb00-4493-90c0-a41a373c5a5d: Claiming fa:16:3e:1d:e1:53 10.100.0.13
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.750 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.755 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[bca627df-1bdf-46e1-bfa5-52a1f82154de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.758 255071 DEBUG nova.compute.manager [req-20a378c8-e2a8-4268-9f4f-95682d9ab9e9 req-44e96b6f-c011-4452-80d6-4c440ca1c564 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.7639] device (tapa591a89f-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.764 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e1:53 10.100.0.13'], port_security=['fa:16:3e:1d:e1:53 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fdfa056f-5aa2-4ec1-b558-19291f104ebd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1606039-8d07-4578-bb07-e1193dc21498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03433030-da4a-462a-bc74-36d4da632562', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c31b5f0-bc6c-4aab-ba94-61fe7903fc35, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=a591a89f-fb00-4493-90c0-a41a373c5a5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.764 255071 DEBUG oslo_concurrency.lockutils [req-20a378c8-e2a8-4268-9f4f-95682d9ab9e9 req-44e96b6f-c011-4452-80d6-4c440ca1c564 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.7653] device (tapa591a89f-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.765 255071 DEBUG oslo_concurrency.lockutils [req-20a378c8-e2a8-4268-9f4f-95682d9ab9e9 req-44e96b6f-c011-4452-80d6-4c440ca1c564 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.765 255071 DEBUG oslo_concurrency.lockutils [req-20a378c8-e2a8-4268-9f4f-95682d9ab9e9 req-44e96b6f-c011-4452-80d6-4c440ca1c564 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.765 255071 DEBUG nova.compute.manager [req-20a378c8-e2a8-4268-9f4f-95682d9ab9e9 req-44e96b6f-c011-4452-80d6-4c440ca1c564 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Processing event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00110|binding|INFO|Setting lport a591a89f-fb00-4493-90c0-a41a373c5a5d ovn-installed in OVS
Nov 29 08:07:28 compute-0 ovn_controller[153295]: 2025-11-29T08:07:28Z|00111|binding|INFO|Setting lport a591a89f-fb00-4493-90c0-a41a373c5a5d up in Southbound
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.775 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.777 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.787 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0988f8-8f0c-4ba2-8536-3f58ce726646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b360768-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:90:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588608, 'reachable_time': 16899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279228, 'error': None, 'target': 'ovnmeta-2b360768-ee11-45df-a7b1-30c167686953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 systemd-machined[216271]: New machine qemu-12-instance-0000000c.
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.812 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccf30f6-870a-45b3-99c7-ce415d087758]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:9071'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588608, 'tstamp': 588608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279232, 'error': None, 'target': 'ovnmeta-2b360768-ee11-45df-a7b1-30c167686953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.830 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[556d211f-b7e1-418a-a3f8-449c65df0bcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b360768-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:90:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588608, 'reachable_time': 16899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279233, 'error': None, 'target': 'ovnmeta-2b360768-ee11-45df-a7b1-30c167686953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.871 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5d52f7-75d4-44db-9ad4-e0f655979a93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.991 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[354e0cc1-ef44-4524-a709-95ed87a04a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.993 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b360768-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.993 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:28 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:28.994 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b360768-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:28 compute-0 kernel: tap2b360768-e0: entered promiscuous mode
Nov 29 08:07:28 compute-0 NetworkManager[49116]: <info>  [1764403648.9972] manager: (tap2b360768-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.999 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:07:28 compute-0 nova_compute[255040]: 2025-11-29 08:07:28.999 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:29.000 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b360768-e0, col_values=(('external_ids', {'iface-id': 'd577047c-d6ea-4cbf-ab19-e45b4b47b0f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:29 compute-0 ovn_controller[153295]: 2025-11-29T08:07:29Z|00112|binding|INFO|Releasing lport d577047c-d6ea-4cbf-ab19-e45b4b47b0f6 from this chassis (sb_readonly=0)
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.004 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.021 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:29.023 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b360768-ee11-45df-a7b1-30c167686953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b360768-ee11-45df-a7b1-30c167686953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:29.025 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9992287a-4d0a-4e72-8083-1eb37f1d0fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:29.025 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-2b360768-ee11-45df-a7b1-30c167686953
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/2b360768-ee11-45df-a7b1-30c167686953.pid.haproxy
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 2b360768-ee11-45df-a7b1-30c167686953
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:07:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:29.026 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b360768-ee11-45df-a7b1-30c167686953', 'env', 'PROCESS_TAG=haproxy-2b360768-ee11-45df-a7b1-30c167686953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b360768-ee11-45df-a7b1-30c167686953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.180 255071 DEBUG oslo_concurrency.lockutils [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.181 255071 DEBUG oslo_concurrency.lockutils [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.196 255071 INFO nova.compute.manager [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Detaching volume cca9f80d-43e0-486a-935a-1377b2493429
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.301 255071 INFO nova.virt.block_device [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Attempting to driver detach volume cca9f80d-43e0-486a-935a-1377b2493429 from mountpoint /dev/vdb
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.321 255071 DEBUG nova.virt.libvirt.driver [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Attempting to detach device vdb from instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.322 255071 DEBUG nova.virt.libvirt.guest [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cca9f80d-43e0-486a-935a-1377b2493429">
Nov 29 08:07:29 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   </source>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <serial>cca9f80d-43e0-486a-935a-1377b2493429</serial>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]: </disk>
Nov 29 08:07:29 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.422 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.422 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.422 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.423 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.483 255071 INFO nova.virt.libvirt.driver [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully detached device vdb from instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 from the persistent domain config.
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.484 255071 DEBUG nova.virt.libvirt.driver [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.484 255071 DEBUG nova.virt.libvirt.guest [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cca9f80d-43e0-486a-935a-1377b2493429">
Nov 29 08:07:29 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   </source>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <serial>cca9f80d-43e0-486a-935a-1377b2493429</serial>
Nov 29 08:07:29 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:29 compute-0 nova_compute[255040]: </disk>
Nov 29 08:07:29 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:29 compute-0 podman[279270]: 2025-11-29 08:07:29.424954839 +0000 UTC m=+0.034262015 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:07:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:29 compute-0 podman[279270]: 2025-11-29 08:07:29.618482286 +0000 UTC m=+0.227789442 container create d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:07:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 5.7 MiB/s wr, 100 op/s
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.668 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:29 compute-0 systemd[1]: Started libpod-conmon-d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d.scope.
Nov 29 08:07:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd4462a06011eab897301f40e8cec203a5cc30897bb84c176af69c9bc5c3da0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.847 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403649.846662, fbf20945-7898-4904-95c5-0047536f3eab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.848 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] VM Started (Lifecycle Event)
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.850 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.854 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.859 255071 INFO nova.virt.libvirt.driver [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Instance spawned successfully.
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.859 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.884 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.893 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.893 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.894 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.894 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.895 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.895 255071 DEBUG nova.virt.libvirt.driver [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.899 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.933 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.933 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403649.8471174, fbf20945-7898-4904-95c5-0047536f3eab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.934 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] VM Paused (Lifecycle Event)
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.956 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.962 255071 INFO nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Took 11.51 seconds to spawn the instance on the hypervisor.
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.962 255071 DEBUG nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.964 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403649.8500319, fdfa056f-5aa2-4ec1-b558-19291f104ebd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.964 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] VM Started (Lifecycle Event)
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.993 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.997 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403649.8510292, fdfa056f-5aa2-4ec1-b558-19291f104ebd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:29 compute-0 nova_compute[255040]: 2025-11-29 08:07:29.997 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] VM Paused (Lifecycle Event)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.023 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.037 255071 INFO nova.compute.manager [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Took 12.56 seconds to build instance.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.040 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.059 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.060 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403649.8534687, fbf20945-7898-4904-95c5-0047536f3eab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.060 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] VM Resumed (Lifecycle Event)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.063 255071 DEBUG oslo_concurrency.lockutils [None req-a31ebaab-fa7a-4a93-a7a1-801aa671af20 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.076 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.079 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:30 compute-0 podman[279270]: 2025-11-29 08:07:30.111169798 +0000 UTC m=+0.720476984 container init d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:07:30 compute-0 podman[279270]: 2025-11-29 08:07:30.119858692 +0000 UTC m=+0.729165848 container start d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.138 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403650.1368873, b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.139 255071 DEBUG nova.virt.libvirt.driver [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.142 255071 INFO nova.virt.libvirt.driver [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully detached device vdb from instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 from the live domain config.
Nov 29 08:07:30 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [NOTICE]   (279373) : New worker (279377) forked
Nov 29 08:07:30 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [NOTICE]   (279373) : Loading success.
Nov 29 08:07:30 compute-0 ceph-mon[75237]: pgmap v1441: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 5.7 MiB/s wr, 100 op/s
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.313 255071 DEBUG nova.objects.instance [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'flavor' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.354 255071 DEBUG oslo_concurrency.lockutils [None req-e36b7583-3d74-42e6-aa4f-a2746c39e52b c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.379 163500 INFO neutron.agent.ovn.metadata.agent [-] Port a591a89f-fb00-4493-90c0-a41a373c5a5d in datapath b1606039-8d07-4578-bb07-e1193dc21498 unbound from our chassis
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.383 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.406 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ad047873-2d3f-4be7-8879-18ad56a1869e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.408 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1606039-81 in ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.411 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1606039-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.411 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8420926b-a8bd-4531-bbbb-3bd58cbe8d6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.413 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[34aa410e-5d47-4883-8d52-0a7883993f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.430 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[2439fb99-ed43-4d4d-83f6-bba37c7d96ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.449 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[42ea924c-43fc-450a-a5c3-a1f18f22a6ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.489 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[abe7a376-bddb-4c4f-a1bf-b9251f32fc97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 NetworkManager[49116]: <info>  [1764403650.4992] manager: (tapb1606039-80): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.501 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7636343b-3202-410e-b6c5-5e803ccd94e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.510 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.525 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.526 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.526 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.527 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.528 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.528 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.549 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[739d0e5b-0d7e-463b-9dd8-df6e0065409c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.550 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.551 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.551 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.551 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.552 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.555 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[479d7e36-084f-47d5-9681-caddb3130f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 NetworkManager[49116]: <info>  [1764403650.5847] device (tapb1606039-80): carrier: link connected
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.591 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[785397f6-4fa0-4d14-9afd-2a821554f214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.616 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3b59ac86-6fff-428c-9f5a-d95d142bef28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1606039-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:11:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588792, 'reachable_time': 27067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279397, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.641 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[817bda2d-dfb5-47c0-a419-5db13f84d4dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:11b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588792, 'tstamp': 588792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279398, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.669 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[04ae18ce-7dc5-4e89-a302-0032e099c851]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1606039-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:11:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588792, 'reachable_time': 27067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279399, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.713 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb4f82d-1207-4367-982e-dbf930e2032e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.797 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ddecef0a-5fab-474b-82ae-89c88f2a631b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.799 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1606039-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.800 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.800 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1606039-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:30 compute-0 kernel: tapb1606039-80: entered promiscuous mode
Nov 29 08:07:30 compute-0 NetworkManager[49116]: <info>  [1764403650.8040] manager: (tapb1606039-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.803 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.810 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.810 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1606039-80, col_values=(('external_ids', {'iface-id': '27ddf48d-41ab-4a2b-bcec-12ec830f91a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.812 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:30 compute-0 ovn_controller[153295]: 2025-11-29T08:07:30Z|00113|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.829 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.831 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.834 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[281219c3-a5e6-4b0a-95ad-20a5d5ddf5b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.835 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:07:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:30.836 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'env', 'PROCESS_TAG=haproxy-b1606039-8d07-4578-bb07-e1193dc21498', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1606039-8d07-4578-bb07-e1193dc21498.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.839 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.839 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.839 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.840 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.840 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] No waiting events found dispatching network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.841 255071 WARNING nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received unexpected event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c for instance with vm_state active and task_state None.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.841 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.841 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Processing event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.842 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.843 255071 DEBUG oslo_concurrency.lockutils [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.843 255071 DEBUG nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] No waiting events found dispatching network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.843 255071 WARNING nova.compute.manager [req-66590f46-5822-4e50-acad-5e0ca77c1fc9 req-fe92f442-e97e-421a-b9a6-2eb69d53d689 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received unexpected event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d for instance with vm_state building and task_state spawning.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.844 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.859 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.864 255071 INFO nova.virt.libvirt.driver [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance spawned successfully.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.864 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.868 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403650.8678067, fdfa056f-5aa2-4ec1-b558-19291f104ebd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.869 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] VM Resumed (Lifecycle Event)
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.887 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.893 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.893 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.894 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.894 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.895 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.895 255071 DEBUG nova.virt.libvirt.driver [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.906 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.925 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.946 255071 INFO nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Took 9.04 seconds to spawn the instance on the hypervisor.
Nov 29 08:07:30 compute-0 nova_compute[255040]: 2025-11-29 08:07:30.946 255071 DEBUG nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.013 255071 INFO nova.compute.manager [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Took 10.00 seconds to build instance.
Nov 29 08:07:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330532934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.035 255071 DEBUG oslo_concurrency.lockutils [None req-dc85286d-3eba-4331-9fef-21550d66a5c0 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.066 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.199 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.201 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.206 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.207 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.214 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.215 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 podman[279432]: 2025-11-29 08:07:31.216799553 +0000 UTC m=+0.082068083 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.223 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.224 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.378 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:31 compute-0 podman[279471]: 2025-11-29 08:07:31.298530037 +0000 UTC m=+0.036072034 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:07:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/330532934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.486 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.487 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3939MB free_disk=59.89241027832031GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.488 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.488 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.574 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance e13306d3-0b4c-4937-8b4b-83605575ce82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.574 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.575 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance fbf20945-7898-4904-95c5-0047536f3eab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.575 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance fdfa056f-5aa2-4ec1-b558-19291f104ebd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.575 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.576 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.666 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 4.6 MiB/s wr, 92 op/s
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.994 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.995 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.995 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.995 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.996 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.998 255071 INFO nova.compute.manager [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Terminating instance
Nov 29 08:07:31 compute-0 nova_compute[255040]: 2025-11-29 08:07:31.999 255071 DEBUG nova.compute.manager [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:07:32 compute-0 podman[279471]: 2025-11-29 08:07:32.001734184 +0000 UTC m=+0.739276151 container create 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:07:32 compute-0 systemd[1]: Started libpod-conmon-8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd.scope.
Nov 29 08:07:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6494336fe498ee142c7068d1a4bfdc815a23e1c0ff5ff61813d866573877315/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054327054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.198 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.207 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:32 compute-0 podman[279471]: 2025-11-29 08:07:32.216602366 +0000 UTC m=+0.954144333 container init 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.224 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:32 compute-0 podman[279471]: 2025-11-29 08:07:32.225320782 +0000 UTC m=+0.962862749 container start 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.254 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.255 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:32 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [NOTICE]   (279512) : New worker (279514) forked
Nov 29 08:07:32 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [NOTICE]   (279512) : Loading success.
Nov 29 08:07:32 compute-0 kernel: tap3adae585-03 (unregistering): left promiscuous mode
Nov 29 08:07:32 compute-0 NetworkManager[49116]: <info>  [1764403652.3830] device (tap3adae585-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:07:32 compute-0 ovn_controller[153295]: 2025-11-29T08:07:32Z|00114|binding|INFO|Releasing lport 3adae585-03a6-434e-a645-7fb75855efe0 from this chassis (sb_readonly=0)
Nov 29 08:07:32 compute-0 ovn_controller[153295]: 2025-11-29T08:07:32Z|00115|binding|INFO|Setting lport 3adae585-03a6-434e-a645-7fb75855efe0 down in Southbound
Nov 29 08:07:32 compute-0 ovn_controller[153295]: 2025-11-29T08:07:32Z|00116|binding|INFO|Removing iface tap3adae585-03 ovn-installed in OVS
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.400 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.410 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:2b:db 10.100.0.5'], port_security=['fa:16:3e:52:2b:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e34fda55585f453b8b66f12e625234fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '553abf0a-6893-4b91-98a5-f4750edd0687', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76f3566f-5b18-4f8e-8a2b-ee02876f83ee, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=3adae585-03a6-434e-a645-7fb75855efe0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.423 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 29 08:07:32 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 18.003s CPU time.
Nov 29 08:07:32 compute-0 systemd-machined[216271]: Machine qemu-10-instance-0000000a terminated.
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.455 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 3adae585-03a6-434e-a645-7fb75855efe0 in datapath 40f35c3c-5e61-44c9-af5e-70c7d4a4426c unbound from our chassis
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.458 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40f35c3c-5e61-44c9-af5e-70c7d4a4426c
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.485 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed647fb-db2c-4a1f-aa44-b37bdb86d1ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.519 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[bd65c846-59b0-438e-b4dc-8ea9562422d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.524 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[7d55efae-01bd-4155-816a-3540b89adbc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 ceph-mon[75237]: pgmap v1442: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 264 KiB/s rd, 4.6 MiB/s wr, 92 op/s
Nov 29 08:07:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3054327054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.559 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[16592f21-1ece-46b4-9947-9efd9d7098c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.583 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[73d14108-a870-45fd-8d4f-c84e62efea70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40f35c3c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:36:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579138, 'reachable_time': 35207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279534, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 NetworkManager[49116]: <info>  [1764403652.6198] manager: (tap3adae585-03): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.618 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6b0f86-aaab-4e2c-9054-2c169d685dbe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40f35c3c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579152, 'tstamp': 579152}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279535, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40f35c3c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579156, 'tstamp': 579156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279535, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.621 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40f35c3c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.627 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.634 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.632 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40f35c3c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.633 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.633 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40f35c3c-50, col_values=(('external_ids', {'iface-id': '7416de2d-6dc8-411d-a143-d9d9b0a4507f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:07:32.634 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.642 255071 INFO nova.virt.libvirt.driver [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Instance destroyed successfully.
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.643 255071 DEBUG nova.objects.instance [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'resources' on Instance uuid b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.660 255071 DEBUG nova.virt.libvirt.vif [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-292663265',display_name='tempest-TestStampPattern-server-292663265',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-292663265',id=10,image_ref='66b56617-4575-4cdd-9816-e743304dffab',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-zkgjybqr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e13306d3-0b4c-4937-8b4b-83605575ce82',image_min_disk='1',image_min_ram='0',image_owner_id='e34fda55585f453b8b66f12e625234fe',image_owner_project_name='tempest-TestStampPattern-194782062',image_owner_user_name='tempest-TestStampPattern-194782062-project-member',image_user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:47Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.663 255071 DEBUG nova.network.os_vif_util [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.664 255071 DEBUG nova.network.os_vif_util [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.664 255071 DEBUG os_vif [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.671 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.671 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3adae585-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.676 255071 DEBUG nova.compute.manager [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-unplugged-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.677 255071 DEBUG oslo_concurrency.lockutils [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.678 255071 DEBUG oslo_concurrency.lockutils [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.678 255071 DEBUG oslo_concurrency.lockutils [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.678 255071 DEBUG nova.compute.manager [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] No waiting events found dispatching network-vif-unplugged-3adae585-03a6-434e-a645-7fb75855efe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.678 255071 DEBUG nova.compute.manager [req-596da109-0e86-45d2-9079-92705af5392b req-18db1d0a-10d7-46f0-9f1d-77d0874e7381 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-unplugged-3adae585-03a6-434e-a645-7fb75855efe0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.679 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.680 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.683 255071 INFO os_vif [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:2b:db,bridge_name='br-int',has_traffic_filtering=True,id=3adae585-03a6-434e-a645-7fb75855efe0,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3adae585-03')
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.938 255071 DEBUG nova.compute.manager [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-changed-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.938 255071 DEBUG nova.compute.manager [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing instance network info cache due to event network-changed-3adae585-03a6-434e-a645-7fb75855efe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.939 255071 DEBUG oslo_concurrency.lockutils [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.939 255071 DEBUG oslo_concurrency.lockutils [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:32 compute-0 nova_compute[255040]: 2025-11-29 08:07:32.939 255071 DEBUG nova.network.neutron [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Refreshing network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 29 08:07:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.672 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.706 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.706 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:34 compute-0 ceph-mon[75237]: pgmap v1443: 305 pgs: 305 active+clean; 411 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.892 255071 DEBUG nova.compute.manager [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-changed-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.893 255071 DEBUG nova.compute.manager [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Refreshing instance network info cache due to event network-changed-2eff6be1-3572-4ee8-b40e-208a0051b03c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.894 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.894 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:34 compute-0 nova_compute[255040]: 2025-11-29 08:07:34.894 255071 DEBUG nova.network.neutron [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Refreshing network info cache for port 2eff6be1-3572-4ee8-b40e-208a0051b03c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.044 255071 DEBUG nova.compute.manager [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-changed-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.044 255071 DEBUG nova.compute.manager [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Refreshing instance network info cache due to event network-changed-a591a89f-fb00-4493-90c0-a41a373c5a5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.045 255071 DEBUG oslo_concurrency.lockutils [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.045 255071 DEBUG oslo_concurrency.lockutils [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.045 255071 DEBUG nova.network.neutron [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Refreshing network info cache for port a591a89f-fb00-4493-90c0-a41a373c5a5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.558 255071 DEBUG nova.network.neutron [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updated VIF entry in instance network info cache for port 3adae585-03a6-434e-a645-7fb75855efe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.559 255071 DEBUG nova.network.neutron [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating instance_info_cache with network_info: [{"id": "3adae585-03a6-434e-a645-7fb75855efe0", "address": "fa:16:3e:52:2b:db", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3adae585-03", "ovs_interfaceid": "3adae585-03a6-434e-a645-7fb75855efe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:35 compute-0 nova_compute[255040]: 2025-11-29 08:07:35.581 255071 DEBUG oslo_concurrency.lockutils [req-4717d0f1-6ca0-4283-bf4c-f67a6224abad req-d4ddd308-b787-4770-abb0-0306777d7f4c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 398 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 208 op/s
Nov 29 08:07:36 compute-0 ceph-mon[75237]: pgmap v1444: 305 pgs: 305 active+clean; 398 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 208 op/s
Nov 29 08:07:36 compute-0 nova_compute[255040]: 2025-11-29 08:07:36.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 398 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.1 MiB/s wr, 184 op/s
Nov 29 08:07:37 compute-0 nova_compute[255040]: 2025-11-29 08:07:37.708 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:37 compute-0 podman[279567]: 2025-11-29 08:07:37.936217966 +0000 UTC m=+0.094671283 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.594 255071 DEBUG nova.network.neutron [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updated VIF entry in instance network info cache for port 2eff6be1-3572-4ee8-b40e-208a0051b03c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.595 255071 DEBUG nova.network.neutron [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updating instance_info_cache with network_info: [{"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.621 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fbf20945-7898-4904-95c5-0047536f3eab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.623 255071 DEBUG nova.compute.manager [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.623 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.623 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.624 255071 DEBUG oslo_concurrency.lockutils [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.624 255071 DEBUG nova.compute.manager [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] No waiting events found dispatching network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:38 compute-0 nova_compute[255040]: 2025-11-29 08:07:38.624 255071 WARNING nova.compute.manager [req-15e7e2ff-3ba7-49b3-89c0-f56fc8ed32ca req-246dff59-dbff-4539-9ee3-af9148389f8a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received unexpected event network-vif-plugged-3adae585-03a6-434e-a645-7fb75855efe0 for instance with vm_state active and task_state deleting.
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:38 compute-0 ceph-mon[75237]: pgmap v1445: 305 pgs: 305 active+clean; 398 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.1 MiB/s wr, 184 op/s
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:07:38
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'volumes']
Nov 29 08:07:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:07:39 compute-0 nova_compute[255040]: 2025-11-29 08:07:39.499 255071 DEBUG nova.network.neutron [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updated VIF entry in instance network info cache for port a591a89f-fb00-4493-90c0-a41a373c5a5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:39 compute-0 nova_compute[255040]: 2025-11-29 08:07:39.500 255071 DEBUG nova.network.neutron [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updating instance_info_cache with network_info: [{"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:39 compute-0 nova_compute[255040]: 2025-11-29 08:07:39.516 255071 DEBUG oslo_concurrency.lockutils [req-6eb4b846-4f92-466e-ae5e-2767697a2abb req-31661754-cfa1-486d-8581-11fb2a42968f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fdfa056f-5aa2-4ec1-b558-19291f104ebd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 29 08:07:39 compute-0 nova_compute[255040]: 2025-11-29 08:07:39.674 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:41 compute-0 ceph-mon[75237]: pgmap v1446: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 29 08:07:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 KiB/s wr, 163 op/s
Nov 29 08:07:42 compute-0 nova_compute[255040]: 2025-11-29 08:07:42.710 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:42 compute-0 ceph-mon[75237]: pgmap v1447: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 KiB/s wr, 163 op/s
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:07:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 KiB/s wr, 158 op/s
Nov 29 08:07:44 compute-0 nova_compute[255040]: 2025-11-29 08:07:44.676 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:45 compute-0 ceph-mon[75237]: pgmap v1448: 305 pgs: 305 active+clean; 390 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 KiB/s wr, 158 op/s
Nov 29 08:07:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 392 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 153 KiB/s wr, 106 op/s
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.639 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403652.6370027, b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.640 255071 INFO nova.compute.manager [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] VM Stopped (Lifecycle Event)
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.661 255071 DEBUG nova.compute.manager [None req-51d7fdb8-3188-41bf-a333-01a39cafe2d2 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.669 255071 DEBUG nova.compute.manager [None req-51d7fdb8-3188-41bf-a333-01a39cafe2d2 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:07:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 392 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 152 KiB/s wr, 31 op/s
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.691 255071 INFO nova.compute.manager [None req-51d7fdb8-3188-41bf-a333-01a39cafe2d2 - - - - - -] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 29 08:07:47 compute-0 nova_compute[255040]: 2025-11-29 08:07:47.765 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:48 compute-0 ceph-mon[75237]: pgmap v1449: 305 pgs: 305 active+clean; 392 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 153 KiB/s wr, 106 op/s
Nov 29 08:07:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 401 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Nov 29 08:07:49 compute-0 nova_compute[255040]: 2025-11-29 08:07:49.679 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:50 compute-0 ceph-mon[75237]: pgmap v1450: 305 pgs: 305 active+clean; 392 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 152 KiB/s wr, 31 op/s
Nov 29 08:07:51 compute-0 sshd[189732]: Timeout before authentication for connection from 45.78.219.195 to 38.102.83.203, pid = 275896
Nov 29 08:07:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 402 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 1.4 MiB/s wr, 33 op/s
Nov 29 08:07:51 compute-0 podman[279588]: 2025-11-29 08:07:51.961741779 +0000 UTC m=+0.104445960 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 08:07:52 compute-0 nova_compute[255040]: 2025-11-29 08:07:52.767 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 406 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Nov 29 08:07:53 compute-0 ceph-mon[75237]: pgmap v1451: 305 pgs: 305 active+clean; 401 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Nov 29 08:07:54 compute-0 nova_compute[255040]: 2025-11-29 08:07:54.681 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 410 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.5 MiB/s wr, 41 op/s
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.780549526s, txc = 0x558bf245a900
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.780166149s, txc = 0x558bf314af00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778464794s, txc = 0x558bf245bb00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778217316s, txc = 0x558bf3096900
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778244019s, txc = 0x558bf3071800
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778017998s, txc = 0x558bf21b2f00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778039455s, txc = 0x558bf218bb00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777891159s, txc = 0x558bf1ea0000
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777847767s, txc = 0x558bf245af00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777720928s, txc = 0x558bf24a6900
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777545452s, txc = 0x558bf23acf00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777036667s, txc = 0x558bf21b3200
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777060509s, txc = 0x558bf310f500
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776832104s, txc = 0x558bf2d50300
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776288033s, txc = 0x558bf23d8c00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776040554s, txc = 0x558bf1ec3b00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.595216274s, txc = 0x558bf24a7b00
Nov 29 08:07:56 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.593986034s, txc = 0x558bf2516600
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020658713249658137 of space, bias 1.0, pg target 0.6197613974897441 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007625797681838901 of space, bias 1.0, pg target 0.22877393045516703 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014239867202253044 of space, bias 1.0, pg target 0.42719601606759133 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.832301140s, txc = 0x5571f323c600
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831918716s, txc = 0x5571f3e68000
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831801891s, txc = 0x5571f32bbb00
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831669331s, txc = 0x5571f3e24900
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831518650s, txc = 0x5571f3a83500
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831317902s, txc = 0x5571f2f5c000
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831207275s, txc = 0x5571f323c900
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830897808s, txc = 0x5571f3dd2300
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830754280s, txc = 0x5571f3e71800
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830445766s, txc = 0x5571f38f4000
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.829812527s, txc = 0x5571f32ba300
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.648219585s, txc = 0x5571f2ccdb00
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.648050308s, txc = 0x5571f3e9e600
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.647362709s, txc = 0x5571f3e70300
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.646933079s, txc = 0x5571f2ca5500
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.646096706s, txc = 0x5571f3b26600
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.645798206s, txc = 0x5571f3e48f00
Nov 29 08:07:56 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.645304680s, txc = 0x5571f3f61b00
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.675352097s, txc = 0x562224a94300
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672725677s, txc = 0x5622258c8000
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672294617s, txc = 0x56222492ec00
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672754765s, txc = 0x562224af6000
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672529221s, txc = 0x562224ad6f00
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.671866417s, txc = 0x562225459200
Nov 29 08:07:56 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.671420097s, txc = 0x562224982f00
Nov 29 08:07:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:56 compute-0 ceph-mon[75237]: pgmap v1452: 305 pgs: 305 active+clean; 402 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 1.4 MiB/s wr, 33 op/s
Nov 29 08:07:56 compute-0 ceph-mon[75237]: pgmap v1453: 305 pgs: 305 active+clean; 406 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Nov 29 08:07:56 compute-0 ceph-mon[75237]: pgmap v1454: 305 pgs: 305 active+clean; 410 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 2.5 MiB/s wr, 41 op/s
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.202 255071 INFO nova.virt.libvirt.driver [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Deleting instance files /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_del
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.203 255071 INFO nova.virt.libvirt.driver [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Deletion of /var/lib/nova/instances/b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12_del complete
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.264 255071 INFO nova.compute.manager [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Took 25.26 seconds to destroy the instance on the hypervisor.
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.265 255071 DEBUG oslo.service.loopingcall [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.266 255071 DEBUG nova.compute.manager [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.266 255071 DEBUG nova.network.neutron [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:07:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 410 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.4 MiB/s wr, 23 op/s
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.770 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.909 255071 DEBUG nova.network.neutron [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.931 255071 INFO nova.compute.manager [-] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Took 0.66 seconds to deallocate network for instance.
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.973 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.975 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:57 compute-0 nova_compute[255040]: 2025-11-29 08:07:57.978 255071 DEBUG nova.compute.manager [req-82546c55-d303-403e-aa1c-955e87c42be6 req-18cbe73a-f4f2-4851-bc53-f85905fc16f8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12] Received event network-vif-deleted-3adae585-03a6-434e-a645-7fb75855efe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.084 255071 DEBUG oslo_concurrency.processutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:58 compute-0 ovn_controller[153295]: 2025-11-29T08:07:58Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:e1:53 10.100.0.13
Nov 29 08:07:58 compute-0 ovn_controller[153295]: 2025-11-29T08:07:58Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:e1:53 10.100.0.13
Nov 29 08:07:58 compute-0 ceph-mon[75237]: pgmap v1455: 305 pgs: 305 active+clean; 410 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.4 MiB/s wr, 23 op/s
Nov 29 08:07:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397912215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.578 255071 DEBUG oslo_concurrency.processutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.588 255071 DEBUG nova.compute.provider_tree [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.608 255071 DEBUG nova.scheduler.client.report [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.627 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.657 255071 INFO nova.scheduler.client.report [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Deleted allocations for instance b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12
Nov 29 08:07:58 compute-0 nova_compute[255040]: 2025-11-29 08:07:58.721 255071 DEBUG oslo_concurrency.lockutils [None req-232c954f-d8c9-4d6d-905d-3f3e1f814aff c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "b6e51e8c-17cb-4e35-be0f-dc2aa5b28b12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 26.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2365385023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2365385023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:59 compute-0 ovn_controller[153295]: 2025-11-29T08:07:59Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:76:07 10.100.0.4
Nov 29 08:07:59 compute-0 ovn_controller[153295]: 2025-11-29T08:07:59Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:76:07 10.100.0.4
Nov 29 08:07:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 428 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 29 08:07:59 compute-0 nova_compute[255040]: 2025-11-29 08:07:59.685 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3397912215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2365385023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2365385023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:01 compute-0 sudo[279636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:01 compute-0 sudo[279636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:01 compute-0 sudo[279636]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:01 compute-0 podman[279660]: 2025-11-29 08:08:01.361219854 +0000 UTC m=+0.076783386 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:08:01 compute-0 sudo[279670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:01 compute-0 sudo[279670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:01 compute-0 sudo[279670]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:01 compute-0 ceph-mon[75237]: pgmap v1456: 305 pgs: 305 active+clean; 428 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 29 08:08:01 compute-0 sudo[279705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:01 compute-0 sudo[279705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:01 compute-0 sudo[279705]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:01 compute-0 sudo[279730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:08:01 compute-0 sudo[279730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 436 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 2.8 MiB/s wr, 73 op/s
Nov 29 08:08:02 compute-0 sudo[279730]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:08:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:08:02 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:08:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:08:02 compute-0 nova_compute[255040]: 2025-11-29 08:08:02.805 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a73e724c-89d3-4500-8a0a-a09a697526d8 does not exist
Nov 29 08:08:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ead9ed24-a80d-4a01-b35f-57bc21a81d75 does not exist
Nov 29 08:08:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev bb968875-747d-4f8c-8b25-c6d53defecc7 does not exist
Nov 29 08:08:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:08:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:08:03 compute-0 ceph-mon[75237]: pgmap v1457: 305 pgs: 305 active+clean; 436 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 2.8 MiB/s wr, 73 op/s
Nov 29 08:08:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:03 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:08:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:08:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:08:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:08:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:03 compute-0 sudo[279784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:03 compute-0 sudo[279784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:03 compute-0 sudo[279784]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:03 compute-0 sudo[279809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:03 compute-0 sudo[279809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:03 compute-0 sudo[279809]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:03 compute-0 sudo[279834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 441 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Nov 29 08:08:03 compute-0 sudo[279834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:03 compute-0 sudo[279834]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:03 compute-0 sudo[279859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:08:03 compute-0 sudo[279859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073087025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073087025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.123579533 +0000 UTC m=+0.046446310 container create 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:08:04 compute-0 systemd[1]: Started libpod-conmon-3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce.scope.
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.102401174 +0000 UTC m=+0.025267981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.242482121 +0000 UTC m=+0.165348928 container init 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.253909309 +0000 UTC m=+0.176776086 container start 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.258841901 +0000 UTC m=+0.181708688 container attach 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:08:04 compute-0 condescending_hypatia[279941]: 167 167
Nov 29 08:08:04 compute-0 systemd[1]: libpod-3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce.scope: Deactivated successfully.
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.264051371 +0000 UTC m=+0.186918148 container died 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d799fd2b799ae998856178d1760feb54a96b3f5446ba8a9b97ab85f2f888adb-merged.mount: Deactivated successfully.
Nov 29 08:08:04 compute-0 podman[279924]: 2025-11-29 08:08:04.310082469 +0000 UTC m=+0.232949246 container remove 3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hypatia, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:08:04 compute-0 systemd[1]: libpod-conmon-3533fdfdc53d70ec4450dd3d6bc411552179bc9285d3260aa5a2968b8f09f1ce.scope: Deactivated successfully.
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:04 compute-0 ceph-mon[75237]: pgmap v1458: 305 pgs: 305 active+clean; 441 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2073087025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2073087025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:04 compute-0 podman[279965]: 2025-11-29 08:08:04.541873473 +0000 UTC m=+0.055268468 container create fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:08:04 compute-0 systemd[1]: Started libpod-conmon-fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c.scope.
Nov 29 08:08:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:04 compute-0 podman[279965]: 2025-11-29 08:08:04.519346797 +0000 UTC m=+0.032741812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:04 compute-0 podman[279965]: 2025-11-29 08:08:04.643018563 +0000 UTC m=+0.156413578 container init fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:08:04 compute-0 podman[279965]: 2025-11-29 08:08:04.650665199 +0000 UTC m=+0.164060194 container start fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:08:04 compute-0 podman[279965]: 2025-11-29 08:08:04.654639366 +0000 UTC m=+0.168034381 container attach fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:08:04 compute-0 nova_compute[255040]: 2025-11-29 08:08:04.688 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.567 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.569 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.587 255071 DEBUG nova.objects.instance [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.603 255071 INFO nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Ignoring supplied device name: /dev/vdb
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.622 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 08:08:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 448 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.3 MiB/s wr, 110 op/s
Nov 29 08:08:05 compute-0 charming_morse[279981]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:08:05 compute-0 charming_morse[279981]: --> relative data size: 1.0
Nov 29 08:08:05 compute-0 charming_morse[279981]: --> All data devices are unavailable
Nov 29 08:08:05 compute-0 systemd[1]: libpod-fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c.scope: Deactivated successfully.
Nov 29 08:08:05 compute-0 systemd[1]: libpod-fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c.scope: Consumed 1.137s CPU time.
Nov 29 08:08:05 compute-0 conmon[279981]: conmon fd987bf7822b4f289165 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c.scope/container/memory.events
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.839 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:05 compute-0 podman[279965]: 2025-11-29 08:08:05.841856385 +0000 UTC m=+1.355251380 container died fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.841 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.842 255071 INFO nova.compute.manager [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Attaching volume cf12aba3-a386-4b4b-b57d-9f09288b68cb to /dev/vdb
Nov 29 08:08:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 29 08:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a717c0f7fbb81ff6b7532aa3959f362e865428e7a6a0bd42683ff361450542c0-merged.mount: Deactivated successfully.
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.980 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.981 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.995 255071 DEBUG os_brick.utils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:08:05 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.998 255071 DEBUG nova.objects.instance [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'flavor' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:05.997 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 podman[279965]: 2025-11-29 08:08:06.006983236 +0000 UTC m=+1.520378231 container remove fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 08:08:06 compute-0 systemd[1]: libpod-conmon-fd987bf7822b4f2891655c0910b3b2a4997ee2886da42a444d21e7567ab90a3c.scope: Deactivated successfully.
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.020 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.020 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[fc22b7b4-98f1-44f7-bff3-672352f4c814]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.023 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.032 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.034 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.035 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[d791682e-8915-4153-ac19-e15c3bb2ded1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.038 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.049 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.049 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[79c42ed2-c6f3-4faf-80de-7ba01b58dd79]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.051 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8d6249-dab1-4e36-85f5-ede6b521c530]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.051 255071 DEBUG oslo_concurrency.processutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 sudo[279859]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.085 255071 DEBUG oslo_concurrency.processutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.089 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.090 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.090 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.091 255071 DEBUG os_brick.utils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.091 255071 DEBUG nova.virt.block_device [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updating existing volume attachment record: e62618f1-4cf1-483f-b657-9c7818451899 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:08:06 compute-0 sudo[280030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:06 compute-0 sudo[280030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:06 compute-0 sudo[280030]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:06 compute-0 sudo[280055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:06 compute-0 sudo[280055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:06 compute-0 sudo[280055]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.242 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.243 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.243 255071 INFO nova.compute.manager [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Attaching volume 88b3e3ce-733e-4d1e-9625-90db86ff56b1 to /dev/vdb
Nov 29 08:08:06 compute-0 sudo[280080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:06 compute-0 sudo[280080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:06 compute-0 sudo[280080]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:06 compute-0 sudo[280105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:08:06 compute-0 sudo[280105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.451 255071 DEBUG os_brick.utils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.453 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.464 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.464 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[d2eaa86c-a5fe-450a-8346-705d96aed954]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.466 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.473 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.473 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[0928e68d-82de-4e29-990d-59b9e8886a73]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.475 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.485 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.485 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[a5656563-0bb7-496a-bafb-2330f6e71423]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.490 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb94706-e318-46b4-845d-fbb907216ad5]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.490 255071 DEBUG oslo_concurrency.processutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.513 255071 DEBUG oslo_concurrency.processutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.516 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.516 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.516 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.516 255071 DEBUG os_brick.utils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.517 255071 DEBUG nova.virt.block_device [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updating existing volume attachment record: 9774611c-6349-42de-a9b7-dfa6f6d5f4c4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:08:06 compute-0 ceph-mon[75237]: pgmap v1459: 305 pgs: 305 active+clean; 448 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.3 MiB/s wr, 110 op/s
Nov 29 08:08:06 compute-0 ceph-mon[75237]: osdmap e256: 3 total, 3 up, 3 in
Nov 29 08:08:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1289120613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.739560228 +0000 UTC m=+0.044979521 container create 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:08:06 compute-0 systemd[1]: Started libpod-conmon-418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281.scope.
Nov 29 08:08:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.721947794 +0000 UTC m=+0.027367107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.827 255071 DEBUG nova.objects.instance [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.830871103 +0000 UTC m=+0.136290426 container init 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.838659363 +0000 UTC m=+0.144078676 container start 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.843459561 +0000 UTC m=+0.148878864 container attach 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:08:06 compute-0 competent_hellman[280191]: 167 167
Nov 29 08:08:06 compute-0 systemd[1]: libpod-418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281.scope: Deactivated successfully.
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.844552351 +0000 UTC m=+0.149971644 container died 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.857 255071 DEBUG nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Attempting to attach volume cf12aba3-a386-4b4b-b57d-9f09288b68cb with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:08:06 compute-0 nova_compute[255040]: 2025-11-29 08:08:06.863 255071 DEBUG nova.virt.libvirt.guest [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:08:06 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cf12aba3-a386-4b4b-b57d-9f09288b68cb">
Nov 29 08:08:06 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:08:06 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:06 compute-0 nova_compute[255040]:   <serial>cf12aba3-a386-4b4b-b57d-9f09288b68cb</serial>
Nov 29 08:08:06 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:06 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-135a402808f93ddc0bfd249e49d83647500c02c1fb003af3c0ac7aedc58b72de-merged.mount: Deactivated successfully.
Nov 29 08:08:06 compute-0 podman[280175]: 2025-11-29 08:08:06.896358764 +0000 UTC m=+0.201778057 container remove 418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:08:06 compute-0 systemd[1]: libpod-conmon-418a98647de86f0cceef3e08d85672cc222123f968049b9fb862eb7378fc8281.scope: Deactivated successfully.
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.006 255071 DEBUG nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.009 255071 DEBUG nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.009 255071 DEBUG nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.009 255071 DEBUG nova.virt.libvirt.driver [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No VIF found with MAC fa:16:3e:1d:e1:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:07 compute-0 podman[280233]: 2025-11-29 08:08:07.130599354 +0000 UTC m=+0.065462241 container create b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:08:07 compute-0 systemd[1]: Started libpod-conmon-b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4.scope.
Nov 29 08:08:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813447968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:07 compute-0 podman[280233]: 2025-11-29 08:08:07.104270356 +0000 UTC m=+0.039133283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cfe8d60267856634fe2821da2b84f421a77817ab3ff55e619229fd2125541e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cfe8d60267856634fe2821da2b84f421a77817ab3ff55e619229fd2125541e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cfe8d60267856634fe2821da2b84f421a77817ab3ff55e619229fd2125541e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cfe8d60267856634fe2821da2b84f421a77817ab3ff55e619229fd2125541e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:07 compute-0 podman[280233]: 2025-11-29 08:08:07.234451848 +0000 UTC m=+0.169314765 container init b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:08:07 compute-0 podman[280233]: 2025-11-29 08:08:07.242599906 +0000 UTC m=+0.177462793 container start b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:08:07 compute-0 podman[280233]: 2025-11-29 08:08:07.246172832 +0000 UTC m=+0.181035719 container attach b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.261 255071 DEBUG oslo_concurrency.lockutils [None req-2ef5631a-b189-4088-9de5-3d5ce6c889e6 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.382 255071 DEBUG os_brick.encryptors [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Using volume encryption metadata '{'encryption_key_id': 'e925dc48-d2cc-466d-96b3-73d895a242d7', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'fbf20945-7898-4904-95c5-0047536f3eab', 'attached_at': '', 'detached_at': '', 'volume_id': '88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.392 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.408 255071 DEBUG barbicanclient.v1.secrets [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/e925dc48-d2cc-466d-96b3-73d895a242d7 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.409 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.449 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.449 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.466 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.467 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.493 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.493 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.516 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.516 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.540 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.540 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.580 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.581 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.602 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.603 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.625 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.625 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.647 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.648 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 448 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 29 08:08:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 29 08:08:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1289120613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1813447968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.699 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.700 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 08:08:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.722 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.723 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.757 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.757 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.803 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.804 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.807 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.835 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.835 255071 INFO barbicanclient.base [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Calculated Secrets uuid ref: secrets/e925dc48-d2cc-466d-96b3-73d895a242d7
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.864 255071 DEBUG barbicanclient.client [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.865 255071 DEBUG nova.virt.libvirt.host [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:08:07 compute-0 nova_compute[255040]:     <volume>88b3e3ce-733e-4d1e-9625-90db86ff56b1</volume>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:08:07 compute-0 nova_compute[255040]: </secret>
Nov 29 08:08:07 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.881 255071 DEBUG nova.objects.instance [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'flavor' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.907 255071 DEBUG nova.virt.libvirt.driver [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Attempting to attach volume 88b3e3ce-733e-4d1e-9625-90db86ff56b1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:08:07 compute-0 nova_compute[255040]: 2025-11-29 08:08:07.911 255071 DEBUG nova.virt.libvirt.guest [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1">
Nov 29 08:08:07 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:08:07 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <serial>88b3e3ce-733e-4d1e-9625-90db86ff56b1</serial>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:08:07 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="9cf8ff37-67c6-4090-9ec5-0477fc500364"/>
Nov 29 08:08:07 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:08:07 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:07 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]: {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     "0": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "devices": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "/dev/loop3"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             ],
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_name": "ceph_lv0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_size": "21470642176",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "name": "ceph_lv0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "tags": {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.crush_device_class": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.encrypted": "0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_id": "0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.vdo": "0"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             },
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "vg_name": "ceph_vg0"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         }
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     ],
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     "1": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "devices": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "/dev/loop4"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             ],
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_name": "ceph_lv1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_size": "21470642176",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "name": "ceph_lv1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "tags": {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.crush_device_class": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.encrypted": "0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_id": "1",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.vdo": "0"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             },
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "vg_name": "ceph_vg1"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         }
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     ],
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     "2": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "devices": [
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "/dev/loop5"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             ],
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_name": "ceph_lv2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_size": "21470642176",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "name": "ceph_lv2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "tags": {
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.crush_device_class": "",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.encrypted": "0",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osd_id": "2",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:                 "ceph.vdo": "0"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             },
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "type": "block",
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:             "vg_name": "ceph_vg2"
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:         }
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]:     ]
Nov 29 08:08:08 compute-0 fervent_jepsen[280249]: }
Nov 29 08:08:08 compute-0 systemd[1]: libpod-b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4.scope: Deactivated successfully.
Nov 29 08:08:08 compute-0 podman[280233]: 2025-11-29 08:08:08.118452592 +0000 UTC m=+1.053315489 container died b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-58cfe8d60267856634fe2821da2b84f421a77817ab3ff55e619229fd2125541e-merged.mount: Deactivated successfully.
Nov 29 08:08:08 compute-0 podman[280233]: 2025-11-29 08:08:08.1957599 +0000 UTC m=+1.130622787 container remove b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:08:08 compute-0 systemd[1]: libpod-conmon-b51dd70c18dae0b904376076b6691c71294ec1cb6fc8eea8fd316e3e665302d4.scope: Deactivated successfully.
Nov 29 08:08:08 compute-0 sudo[280105]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:08 compute-0 podman[280278]: 2025-11-29 08:08:08.266844992 +0000 UTC m=+0.103160185 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:08:08 compute-0 sudo[280305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:08 compute-0 sudo[280305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:08 compute-0 sudo[280305]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:08 compute-0 sudo[280333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:08 compute-0 sudo[280333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:08 compute-0 sudo[280333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:08 compute-0 sudo[280358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:08 compute-0 sudo[280358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:08 compute-0 sudo[280358]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:08 compute-0 sudo[280383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:08:08 compute-0 sudo[280383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:08.614 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.614 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:08 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:08.618 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:08 compute-0 ceph-mon[75237]: pgmap v1461: 305 pgs: 305 active+clean; 448 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 29 08:08:08 compute-0 ceph-mon[75237]: osdmap e257: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 29 08:08:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.796 255071 DEBUG nova.compute.manager [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.796 255071 DEBUG nova.compute.manager [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing instance network info cache due to event network-changed-f819ff69-f947-468c-9e7a-6ba9cca9c85f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.796 255071 DEBUG oslo_concurrency.lockutils [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.796 255071 DEBUG oslo_concurrency.lockutils [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.796 255071 DEBUG nova.network.neutron [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Refreshing network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:08 compute-0 podman[280447]: 2025-11-29 08:08:08.90519925 +0000 UTC m=+0.048294970 container create bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.933 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.934 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.934 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.935 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.935 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
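The Acquiring/acquired/released lines above are emitted by oslo.concurrency's lockutils, which nova-compute uses to serialise work on a single instance (one lock per instance UUID, plus a short-lived "<uuid>-events" lock). A minimal sketch of that pattern follows, assuming only the oslo.concurrency library; the function bodies are illustrative and are not Nova's actual code:

    # Sketch of the per-instance locking pattern visible in the DEBUG lines
    # above; the lock names come from the log, the functions are made up.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'e13306d3-0b4c-4937-8b4b-83605575ce82'

    def clear_instance_events():
        # The "<uuid>-events" lock is held only briefly, matching the
        # acquire/release pair logged for _clear_events.
        with lockutils.lock(INSTANCE_UUID + '-events'):
            pass

    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        # Runs while holding the per-instance lock, as in
        # "Lock ... acquired by ... do_terminate_instance" above.
        clear_instance_events()

    do_terminate_instance()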
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.937 255071 INFO nova.compute.manager [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Terminating instance
Nov 29 08:08:08 compute-0 nova_compute[255040]: 2025-11-29 08:08:08.938 255071 DEBUG nova.compute.manager [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:08 compute-0 systemd[1]: Started libpod-conmon-bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea.scope.
Nov 29 08:08:08 compute-0 podman[280447]: 2025-11-29 08:08:08.885412508 +0000 UTC m=+0.028508248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:09 compute-0 podman[280447]: 2025-11-29 08:08:09.003532945 +0000 UTC m=+0.146628685 container init bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:08:09 compute-0 kernel: tapf819ff69-f9 (unregistering): left promiscuous mode
Nov 29 08:08:09 compute-0 podman[280447]: 2025-11-29 08:08:09.015239719 +0000 UTC m=+0.158335439 container start bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:08:09 compute-0 NetworkManager[49116]: <info>  [1764403689.0165] device (tapf819ff69-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:09 compute-0 podman[280447]: 2025-11-29 08:08:09.022063233 +0000 UTC m=+0.165158983 container attach bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:08:09 compute-0 systemd[1]: libpod-bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea.scope: Deactivated successfully.
Nov 29 08:08:09 compute-0 quizzical_dirac[280463]: 167 167
Nov 29 08:08:09 compute-0 conmon[280463]: conmon bf0b3a108a7501b68208 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea.scope/container/memory.events
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.034 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 ovn_controller[153295]: 2025-11-29T08:08:09Z|00117|binding|INFO|Releasing lport f819ff69-f947-468c-9e7a-6ba9cca9c85f from this chassis (sb_readonly=0)
Nov 29 08:08:09 compute-0 ovn_controller[153295]: 2025-11-29T08:08:09Z|00118|binding|INFO|Setting lport f819ff69-f947-468c-9e7a-6ba9cca9c85f down in Southbound
Nov 29 08:08:09 compute-0 ovn_controller[153295]: 2025-11-29T08:08:09Z|00119|binding|INFO|Removing iface tapf819ff69-f9 ovn-installed in OVS
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.044 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:71:48 10.100.0.13'], port_security=['fa:16:3e:0d:71:48 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e13306d3-0b4c-4937-8b4b-83605575ce82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e34fda55585f453b8b66f12e625234fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '553abf0a-6893-4b91-98a5-f4750edd0687', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76f3566f-5b18-4f8e-8a2b-ee02876f83ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=f819ff69-f947-468c-9e7a-6ba9cca9c85f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.046 163500 INFO neutron.agent.ovn.metadata.agent [-] Port f819ff69-f947-468c-9e7a-6ba9cca9c85f in datapath 40f35c3c-5e61-44c9-af5e-70c7d4a4426c unbound from our chassis
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.047 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40f35c3c-5e61-44c9-af5e-70c7d4a4426c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.056 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.050 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b34293ba-4e58-4cc5-bc22-f64b41ce63d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.060 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c namespace which is not needed anymore
Nov 29 08:08:09 compute-0 podman[280470]: 2025-11-29 08:08:09.086698762 +0000 UTC m=+0.035106706 container died bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:08:09 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 29 08:08:09 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 21.026s CPU time.
Nov 29 08:08:09 compute-0 systemd-machined[216271]: Machine qemu-8-instance-00000008 terminated.
Nov 29 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-286b7fc2806149bcb660f2d98a63465d1096a1310820f39162010e91b7f6ea2e-merged.mount: Deactivated successfully.
Nov 29 08:08:09 compute-0 podman[280470]: 2025-11-29 08:08:09.134780405 +0000 UTC m=+0.083188329 container remove bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 08:08:09 compute-0 systemd[1]: libpod-conmon-bf0b3a108a7501b682081c02e75867d88eda7a9fdd568c12a5249109863a81ea.scope: Deactivated successfully.
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.170 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.176 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.185 255071 INFO nova.virt.libvirt.driver [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Instance destroyed successfully.
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.185 255071 DEBUG nova.objects.instance [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lazy-loading 'resources' on Instance uuid e13306d3-0b4c-4937-8b4b-83605575ce82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.201 255071 DEBUG nova.virt.libvirt.vif [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1728969715',display_name='tempest-TestStampPattern-server-1728969715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1728969715',id=8,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJE9RqYsz9HjQ/t1DVesVU5+xhErSoHBDhqDFMn5e1HnxCoCHbyhG0Ca+mVMomD/L3wNZd1oYWRpzT93dK7YeXeDz2hG7gc6vbzGWNmMv5BpvrM+1KI+r/GQ5ox5/o1aRQ==',key_name='tempest-TestStampPattern-1389223213',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e34fda55585f453b8b66f12e625234fe',ramdisk_id='',reservation_id='r-cm16igo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-194782062',owner_user_name='tempest-TestStampPattern-194782062-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:37Z,user_data=None,user_id='c4f53a86d1eb4bdebed4ec5dd9b5ff45',uuid=e13306d3-0b4c-4937-8b4b-83605575ce82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.201 255071 DEBUG nova.network.os_vif_util [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converting VIF {"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.202 255071 DEBUG nova.network.os_vif_util [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.203 255071 DEBUG os_vif [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.208 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.209 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf819ff69-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.217 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.222 255071 INFO os_vif [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:71:48,bridge_name='br-int',has_traffic_filtering=True,id=f819ff69-f947-468c-9e7a-6ba9cca9c85f,network=Network(40f35c3c-5e61-44c9-af5e-70c7d4a4426c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf819ff69-f9')
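The unplug sequence above converts Nova's VIF dict into an os-vif VIFOpenVSwitch object and then commits a single DelPortCommand transaction against br-int. A rough equivalent of just that final transaction, driven directly through ovsdbapp's Open_vSwitch schema API, is sketched below; the ovsdb socket path and timeout are assumptions, not values taken from this log:

    # Sketch of the DelPortCommand(port=tapf819ff69-f9, bridge=br-int,
    # if_exists=True) transaction shown above, issued via ovsdbapp.
    # The local ovsdb-server socket path is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone.
    ovs.del_port('tapf819ff69-f9', bridge='br-int', if_exists=True).execute(
        check_error=True)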
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [NOTICE]   (276423) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [NOTICE]   (276423) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [WARNING]  (276423) : Exiting Master process...
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [WARNING]  (276423) : Exiting Master process...
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [ALERT]    (276423) : Current worker (276425) exited with code 143 (Terminated)
Nov 29 08:08:09 compute-0 neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c[276419]: [WARNING]  (276423) : All workers exited. Exiting... (0)
Nov 29 08:08:09 compute-0 systemd[1]: libpod-2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d.scope: Deactivated successfully.
Nov 29 08:08:09 compute-0 podman[280517]: 2025-11-29 08:08:09.298782795 +0000 UTC m=+0.081902203 container died 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.314 255071 DEBUG nova.compute.manager [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-unplugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.315 255071 DEBUG oslo_concurrency.lockutils [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.315 255071 DEBUG oslo_concurrency.lockutils [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.315 255071 DEBUG oslo_concurrency.lockutils [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.316 255071 DEBUG nova.compute.manager [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] No waiting events found dispatching network-vif-unplugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.316 255071 DEBUG nova.compute.manager [req-a154e795-e4db-41a8-a430-55531eeab7e3 req-ea973a5c-ec4c-4dc0-ab34-fe7f9792f5a5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-unplugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1774ff2c0222df0b25e867da676708186d4a35be5be0d0ea6a0908c0c5521f0-merged.mount: Deactivated successfully.
Nov 29 08:08:09 compute-0 podman[280517]: 2025-11-29 08:08:09.349630073 +0000 UTC m=+0.132749461 container cleanup 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:08:09 compute-0 systemd[1]: libpod-conmon-2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d.scope: Deactivated successfully.
Nov 29 08:08:09 compute-0 podman[280554]: 2025-11-29 08:08:09.378571661 +0000 UTC m=+0.064942197 container create 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 08:08:09 compute-0 systemd[1]: Started libpod-conmon-3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a.scope.
Nov 29 08:08:09 compute-0 podman[280578]: 2025-11-29 08:08:09.446702734 +0000 UTC m=+0.066096239 container remove 2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:08:09 compute-0 podman[280554]: 2025-11-29 08:08:09.352587432 +0000 UTC m=+0.038957988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.453 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2c8e3-ffba-452d-857a-660bcd61c123]: (4, ('Sat Nov 29 08:08:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c (2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d)\n2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d\nSat Nov 29 08:08:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c (2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d)\n2ee0b6df82bc277f980336cf56be5fe21731fefd74de7418c15461917a3c253d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3dfb3f7917788cfb28afba862d15c936b732f4baec49749c616d785e2aba77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3dfb3f7917788cfb28afba862d15c936b732f4baec49749c616d785e2aba77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3dfb3f7917788cfb28afba862d15c936b732f4baec49749c616d785e2aba77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3dfb3f7917788cfb28afba862d15c936b732f4baec49749c616d785e2aba77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.474 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b979cf54-256f-4e4b-b293-2bed93694022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.476 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40f35c3c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.478 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 kernel: tap40f35c3c-50: left promiscuous mode
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.483 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.489 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bd673867-1c73-41ef-a658-5f505ad3d281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 podman[280554]: 2025-11-29 08:08:09.490656926 +0000 UTC m=+0.177027472 container init 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 08:08:09 compute-0 podman[280554]: 2025-11-29 08:08:09.511134427 +0000 UTC m=+0.197504963 container start 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:08:09 compute-0 podman[280554]: 2025-11-29 08:08:09.515133514 +0000 UTC m=+0.201504070 container attach 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.553 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.554 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2291680f-f19a-472c-a797-e561c52c7529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.556 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[06b7af7e-2bca-429a-9511-2a8193e5028e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.583 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9c946c-9be6-49f9-85db-bc0414047e07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579131, 'reachable_time': 22519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280602, 'error': None, 'target': 'ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d40f35c3c\x2d5e61\x2d44c9\x2daf5e\x2d70c7d4a4426c.mount: Deactivated successfully.
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.589 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40f35c3c-5e61-44c9-af5e-70c7d4a4426c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:09.590 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bdb35f-cec2-45b1-a366-8020bb21ed6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 370 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 296 KiB/s wr, 162 op/s
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.689 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.734 255071 INFO nova.virt.libvirt.driver [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Deleting instance files /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82_del
Nov 29 08:08:09 compute-0 nova_compute[255040]: 2025-11-29 08:08:09.738 255071 INFO nova.virt.libvirt.driver [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Deletion of /var/lib/nova/instances/e13306d3-0b4c-4937-8b4b-83605575ce82_del complete
Nov 29 08:08:09 compute-0 ceph-mon[75237]: osdmap e258: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.010 255071 INFO nova.compute.manager [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Took 1.07 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.010 255071 DEBUG oslo.service.loopingcall [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.011 255071 DEBUG nova.compute.manager [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.011 255071 DEBUG nova.network.neutron [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.279 255071 DEBUG nova.network.neutron [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updated VIF entry in instance network info cache for port f819ff69-f947-468c-9e7a-6ba9cca9c85f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.280 255071 DEBUG nova.network.neutron [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [{"id": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "address": "fa:16:3e:0d:71:48", "network": {"id": "40f35c3c-5e61-44c9-af5e-70c7d4a4426c", "bridge": "br-int", "label": "tempest-TestStampPattern-808942618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e34fda55585f453b8b66f12e625234fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf819ff69-f9", "ovs_interfaceid": "f819ff69-f947-468c-9e7a-6ba9cca9c85f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.309 255071 DEBUG oslo_concurrency.lockutils [req-96e0ad67-51de-402f-a4a2-5fd27f627da8 req-28f324c5-b2a6-4cf1-80d2-00c5c5982e9a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-e13306d3-0b4c-4937-8b4b-83605575ce82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.421 255071 DEBUG nova.virt.libvirt.driver [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.423 255071 DEBUG nova.virt.libvirt.driver [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.424 255071 DEBUG nova.virt.libvirt.driver [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.424 255071 DEBUG nova.virt.libvirt.driver [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] No VIF found with MAC fa:16:3e:59:76:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]: {
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_id": 2,
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "type": "bluestore"
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     },
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_id": 0,
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "type": "bluestore"
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     },
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_id": 1,
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:         "type": "bluestore"
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]:     }
Nov 29 08:08:10 compute-0 quirky_wilbur[280594]: }
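The JSON block printed by the quirky_wilbur container above lists this host's three OSDs, keyed by OSD UUID, with the backing device, osd_id and cluster fsid for each. A small sketch of reading that structure follows; `raw` is assumed to hold the captured container output (abridged here to one OSD entry), it is not read from the journal:

    # Sketch: summarising the per-OSD JSON logged above. Only one of the
    # three OSD entries is reproduced, purely to keep the example short.
    import json

    raw = """
    {
        "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
            "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
            "type": "bluestore"
        }
    }
    """

    for osd_uuid, info in json.loads(raw).items():
        print(f"osd.{info['osd_id']}: {info['type']} on {info['device']} "
              f"(cluster fsid {info['ceph_fsid']})")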
Nov 29 08:08:10 compute-0 systemd[1]: libpod-3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a.scope: Deactivated successfully.
Nov 29 08:08:10 compute-0 systemd[1]: libpod-3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a.scope: Consumed 1.048s CPU time.
Nov 29 08:08:10 compute-0 podman[280554]: 2025-11-29 08:08:10.567686641 +0000 UTC m=+1.254057187 container died 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd3dfb3f7917788cfb28afba862d15c936b732f4baec49749c616d785e2aba77-merged.mount: Deactivated successfully.
Nov 29 08:08:10 compute-0 podman[280554]: 2025-11-29 08:08:10.63156838 +0000 UTC m=+1.317938926 container remove 3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wilbur, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:08:10 compute-0 systemd[1]: libpod-conmon-3c1b516401d1fd00522b3d74b22f212bdba32b04befa7254300ed55dd459908a.scope: Deactivated successfully.
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.651 255071 DEBUG oslo_concurrency.lockutils [None req-f3acb46a-b21a-4a7d-b522-516777a4a121 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:10 compute-0 sudo[280383]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.681 255071 DEBUG nova.network.neutron [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:08:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.699 255071 INFO nova.compute.manager [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Took 0.69 seconds to deallocate network for instance.
Nov 29 08:08:10 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ffecf412-19cb-47c4-8283-20ce22734f2c does not exist
Nov 29 08:08:10 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f77e9b4a-4868-4d5a-829b-a6d6bc8cb78e does not exist
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.758 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.758 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:10 compute-0 sudo[280644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 29 08:08:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 sudo[280644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:10 compute-0 sudo[280644]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 ceph-mon[75237]: pgmap v1464: 305 pgs: 305 active+clean; 370 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 296 KiB/s wr, 162 op/s
Nov 29 08:08:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.842 255071 DEBUG oslo_concurrency.processutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:10 compute-0 sudo[280669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:08:10 compute-0 sudo[280669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:10 compute-0 sudo[280669]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:10 compute-0 nova_compute[255040]: 2025-11-29 08:08:10.879 255071 DEBUG nova.compute.manager [req-df1665f5-c49d-4f5c-a708-474f2826a4c2 req-3467e106-e641-4a54-9ef0-bedb366d35a0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-deleted-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600665398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.310 255071 DEBUG oslo_concurrency.processutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
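Annotation: the "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is oslo.concurrency's processutils wrapper shelling out to ceph df. A rough equivalent, assuming the same client keyring and config path as the log (error handling trimmed; the JSON key shown is from ceph's df output schema, not from this log):

import json
from oslo_concurrency import processutils

# Returns (stdout, stderr) as strings; raises ProcessExecutionError if the
# command exits non-zero.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
print(json.loads(out)['stats']['total_bytes'])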
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.323 255071 DEBUG nova.compute.provider_tree [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.327 255071 DEBUG oslo_concurrency.lockutils [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.329 255071 DEBUG oslo_concurrency.lockutils [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.349 255071 INFO nova.compute.manager [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Detaching volume 88b3e3ce-733e-4d1e-9625-90db86ff56b1
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.353 255071 DEBUG nova.scheduler.client.report [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
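Annotation: for the inventory dict above, Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio. A quick check of the figures in this log:

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2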
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.373 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.401 255071 INFO nova.scheduler.client.report [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Deleted allocations for instance e13306d3-0b4c-4937-8b4b-83605575ce82
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.405 255071 DEBUG nova.compute.manager [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.406 255071 DEBUG oslo_concurrency.lockutils [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.406 255071 DEBUG oslo_concurrency.lockutils [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.406 255071 DEBUG oslo_concurrency.lockutils [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.406 255071 DEBUG nova.compute.manager [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] No waiting events found dispatching network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.406 255071 WARNING nova.compute.manager [req-66c056e2-d2be-444e-82fd-c52e32fac5f0 req-bcb82172-2c02-469f-963a-d45114a0c5e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Received unexpected event network-vif-plugged-f819ff69-f947-468c-9e7a-6ba9cca9c85f for instance with vm_state deleted and task_state None.
Nov 29 08:08:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.486 255071 INFO nova.virt.block_device [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Attempting to driver detach volume 88b3e3ce-733e-4d1e-9625-90db86ff56b1 from mountpoint /dev/vdb
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.489 255071 DEBUG oslo_concurrency.lockutils [None req-d650d663-04af-4dd7-9135-136b0d40a2c3 c4f53a86d1eb4bdebed4ec5dd9b5ff45 e34fda55585f453b8b66f12e625234fe - - default default] Lock "e13306d3-0b4c-4937-8b4b-83605575ce82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.581 255071 DEBUG os_brick.encryptors [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Using volume encryption metadata '{'encryption_key_id': 'e925dc48-d2cc-466d-96b3-73d895a242d7', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'fbf20945-7898-4904-95c5-0047536f3eab', 'attached_at': '', 'detached_at': '', 'volume_id': '88b3e3ce-733e-4d1e-9625-90db86ff56b1', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.593 255071 DEBUG nova.virt.libvirt.driver [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Attempting to detach device vdb from instance fbf20945-7898-4904-95c5-0047536f3eab from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.593 255071 DEBUG nova.virt.libvirt.guest [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1">
Nov 29 08:08:11 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <serial>88b3e3ce-733e-4d1e-9625-90db86ff56b1</serial>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:08:11 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="9cf8ff37-67c6-4090-9ec5-0477fc500364"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:08:11 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:11 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.600 255071 INFO nova.virt.libvirt.driver [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Successfully detached device vdb from instance fbf20945-7898-4904-95c5-0047536f3eab from the persistent domain config.
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.600 255071 DEBUG nova.virt.libvirt.driver [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance fbf20945-7898-4904-95c5-0047536f3eab from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.601 255071 DEBUG nova.virt.libvirt.guest [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1">
Nov 29 08:08:11 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <serial>88b3e3ce-733e-4d1e-9625-90db86ff56b1</serial>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:08:11 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="9cf8ff37-67c6-4090-9ec5-0477fc500364"/>
Nov 29 08:08:11 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:08:11 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:11 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:08:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 323 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 215 KiB/s rd, 44 KiB/s wr, 116 op/s
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.724 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403691.7233431, fbf20945-7898-4904-95c5-0047536f3eab => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.726 255071 DEBUG nova.virt.libvirt.driver [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance fbf20945-7898-4904-95c5-0047536f3eab _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.729 255071 INFO nova.virt.libvirt.driver [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Successfully detached device vdb from instance fbf20945-7898-4904-95c5-0047536f3eab from the live domain config.
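Annotation: the persistent/live detach pair above corresponds to libvirt's detachDeviceFlags(): the same <disk> XML is submitted once against the saved domain definition and once against the running guest, and completion of the live detach is signalled asynchronously by the DeviceRemovedEvent seen a few lines earlier. A bare libvirt-python sketch of the same two calls (domain name taken from the systemd machine scope further below; XML trimmed to the essentials; Nova's retry loop and event wait are omitted):

import libvirt

DISK_XML = """<disk type="network" device="disk">
  <source protocol="rbd" name="volumes/volume-88b3e3ce-733e-4d1e-9625-90db86ff56b1">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-0000000b')

# Remove the device from the persistent (on-disk) domain definition ...
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
# ... then hot-unplug it from the running guest; QEMU confirms with a
# device-removed event once the guest releases the disk.
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)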
Nov 29 08:08:11 compute-0 ceph-mon[75237]: osdmap e259: 3 total, 3 up, 3 in
Nov 29 08:08:11 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/600665398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.888 255071 DEBUG nova.objects.instance [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'flavor' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:11 compute-0 nova_compute[255040]: 2025-11-29 08:08:11.930 255071 DEBUG oslo_concurrency.lockutils [None req-23b2d532-c233-4ecd-b570-64bdeba13fe1 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544111884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544111884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
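Annotation: the handle_command/dispatch lines above are monitor-side records of JSON commands such as {"prefix": "df", "format": "json"}. librados exposes the same path from Python; a small sketch, assuming the client.openstack identity and ceph.conf path seen in this log:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # Sends the same JSON command the monitor logs as a dispatch.
    ret, outbuf, _outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    if ret == 0:
        print(json.loads(outbuf))
finally:
    cluster.shutdown()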
Nov 29 08:08:12 compute-0 sshd[189732]: drop connection #0 from [45.78.219.195]:40052 on [38.102.83.203]:22 penalty: exceeded LoginGraceTime
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.890 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.890 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.891 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.891 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.891 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.892 255071 INFO nova.compute.manager [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Terminating instance
Nov 29 08:08:12 compute-0 nova_compute[255040]: 2025-11-29 08:08:12.893 255071 DEBUG nova.compute.manager [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 29 08:08:13 compute-0 ceph-mon[75237]: pgmap v1466: 305 pgs: 305 active+clean; 323 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 215 KiB/s rd, 44 KiB/s wr, 116 op/s
Nov 29 08:08:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/544111884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/544111884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 293 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 47 KiB/s wr, 163 op/s
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.214 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 08:08:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 29 08:08:14 compute-0 ceph-mon[75237]: pgmap v1467: 305 pgs: 305 active+clean; 293 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 243 KiB/s rd, 47 KiB/s wr, 163 op/s
Nov 29 08:08:14 compute-0 kernel: tap2eff6be1-35 (unregistering): left promiscuous mode
Nov 29 08:08:14 compute-0 NetworkManager[49116]: <info>  [1764403694.3969] device (tap2eff6be1-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:14 compute-0 ovn_controller[153295]: 2025-11-29T08:08:14Z|00120|binding|INFO|Releasing lport 2eff6be1-3572-4ee8-b40e-208a0051b03c from this chassis (sb_readonly=0)
Nov 29 08:08:14 compute-0 ovn_controller[153295]: 2025-11-29T08:08:14Z|00121|binding|INFO|Setting lport 2eff6be1-3572-4ee8-b40e-208a0051b03c down in Southbound
Nov 29 08:08:14 compute-0 ovn_controller[153295]: 2025-11-29T08:08:14Z|00122|binding|INFO|Removing iface tap2eff6be1-35 ovn-installed in OVS
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.412 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.420 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:76:07 10.100.0.4'], port_security=['fa:16:3e:59:76:07 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbf20945-7898-4904-95c5-0047536f3eab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b360768-ee11-45df-a7b1-30c167686953', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eeda2edc1f464a5480a29e4ff783c9b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e941dca4-f8e2-4c1a-8bd1-11bea4d6ba77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59d06473-cfb2-4bab-8c48-e5a28c2465ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2eff6be1-3572-4ee8-b40e-208a0051b03c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
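Annotation: the "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's row-event machinery at work: the metadata agent registers an event for updates to the Southbound Port_Binding table, and ovsdbapp hands it the new row plus an old row containing only the changed columns. A stripped-down sketch of that pattern (the match logic is illustrative, not Neutron's actual check):

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire only for 'update' events on the Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # 'old' carries only columns that changed, so this is true exactly
        # when the port's chassis assignment was modified.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        print('binding changed for lport', row.logical_port)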
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.421 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2eff6be1-3572-4ee8-b40e-208a0051b03c in datapath 2b360768-ee11-45df-a7b1-30c167686953 unbound from our chassis
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.423 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b360768-ee11-45df-a7b1-30c167686953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.424 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5579b2f0-d285-42b3-b793-f85aff4f1539]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.424 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b360768-ee11-45df-a7b1-30c167686953 namespace which is not needed anymore
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.428 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 29 08:08:14 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 19.333s CPU time.
Nov 29 08:08:14 compute-0 systemd-machined[216271]: Machine qemu-11-instance-0000000b terminated.
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.518 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.524 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.535 255071 INFO nova.virt.libvirt.driver [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Instance destroyed successfully.
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.535 255071 DEBUG nova.objects.instance [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lazy-loading 'resources' on Instance uuid fbf20945-7898-4904-95c5-0047536f3eab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.548 255071 DEBUG nova.virt.libvirt.vif [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-131074981',display_name='tempest-TestEncryptedCinderVolumes-server-131074981',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-131074981',id=11,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMK9VaGmZ36kv/Mjcz8YDZVjW+jRY/a3Wky6io428BhUkvZBLvNhzuy/PTBG8aoHewyEfjcuHMib0cX1Pr+f0ccuUMkyY0get8uS8l8RvbqyeDD6q9/pKiBO0vGwSsUJ3Q==',key_name='tempest-keypair-1866085076',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eeda2edc1f464a5480a29e4ff783c9b7',ramdisk_id='',reservation_id='r-tiswropc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-2112377596',owner_user_name='tempest-TestEncryptedCinderVolumes-2112377596-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3299eefacb3a43a898b339895ff0f205',uuid=fbf20945-7898-4904-95c5-0047536f3eab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.549 255071 DEBUG nova.network.os_vif_util [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converting VIF {"id": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "address": "fa:16:3e:59:76:07", "network": {"id": "2b360768-ee11-45df-a7b1-30c167686953", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2134492214-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eeda2edc1f464a5480a29e4ff783c9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eff6be1-35", "ovs_interfaceid": "2eff6be1-3572-4ee8-b40e-208a0051b03c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.550 255071 DEBUG nova.network.os_vif_util [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.550 255071 DEBUG os_vif [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.551 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.552 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eff6be1-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
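Annotation: the DelPortCommand transaction above comes from ovsdbapp's Open_vSwitch schema API. Roughly the following, assuming the local ovsdb-server socket path (the log itself does not show which connection string os-vif used):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect an IDL to the local switch database and wrap it in the
# Open_vSwitch API implementation.
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Same operation as the logged txn: DelPortCommand(port=..., bridge=...,
# if_exists=True).
api.del_port('tap2eff6be1-35', bridge='br-int', if_exists=True).execute(
    check_error=True)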
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.587 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.589 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.592 255071 INFO os_vif [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:76:07,bridge_name='br-int',has_traffic_filtering=True,id=2eff6be1-3572-4ee8-b40e-208a0051b03c,network=Network(2b360768-ee11-45df-a7b1-30c167686953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eff6be1-35')
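Annotation: the sequence above ("Converting VIF" -> "Converted object VIFOpenVSwitch" -> "Successfully unplugged") is the os-vif library's public surface. A condensed sketch with the object fields cut down to the ones visible in this log (a real caller would populate the full network and port profile):

import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # load the 'ovs', 'linux_bridge', ... plugins

inst = instance_info.InstanceInfo(
    uuid='fbf20945-7898-4904-95c5-0047536f3eab',
    name='instance-0000000b')

ovs_vif = vif.VIFOpenVSwitch(
    id='2eff6be1-3572-4ee8-b40e-208a0051b03c',
    address='fa:16:3e:59:76:07',
    vif_name='tap2eff6be1-35',
    bridge_name='br-int',
    plugin='ovs',
    network=network.Network(id='2b360768-ee11-45df-a7b1-30c167686953'))

# Dispatches to the 'ovs' plugin, which issues the DelPortCommand seen in
# the ovsdbapp transaction above.
os_vif.unplug(ovs_vif, inst)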
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [NOTICE]   (279373) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [NOTICE]   (279373) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [WARNING]  (279373) : Exiting Master process...
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [WARNING]  (279373) : Exiting Master process...
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [ALERT]    (279373) : Current worker (279377) exited with code 143 (Terminated)
Nov 29 08:08:14 compute-0 neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953[279366]: [WARNING]  (279373) : All workers exited. Exiting... (0)
Nov 29 08:08:14 compute-0 systemd[1]: libpod-d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d.scope: Deactivated successfully.
Nov 29 08:08:14 compute-0 podman[280751]: 2025-11-29 08:08:14.636747715 +0000 UTC m=+0.086767985 container died d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcd4462a06011eab897301f40e8cec203a5cc30897bb84c176af69c9bc5c3da0-merged.mount: Deactivated successfully.
Nov 29 08:08:14 compute-0 podman[280751]: 2025-11-29 08:08:14.684998023 +0000 UTC m=+0.135018293 container cleanup d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.691 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 systemd[1]: libpod-conmon-d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d.scope: Deactivated successfully.
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.698 255071 DEBUG nova.compute.manager [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-unplugged-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.699 255071 DEBUG oslo_concurrency.lockutils [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.699 255071 DEBUG oslo_concurrency.lockutils [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.699 255071 DEBUG oslo_concurrency.lockutils [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.700 255071 DEBUG nova.compute.manager [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] No waiting events found dispatching network-vif-unplugged-2eff6be1-3572-4ee8-b40e-208a0051b03c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.700 255071 DEBUG nova.compute.manager [req-bd52ab3b-5567-4981-882b-b6ff37d8afa3 req-e0f2e971-3a51-4e2f-88f1-69f4d6d13fda cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-unplugged-2eff6be1-3572-4ee8-b40e-208a0051b03c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:14 compute-0 podman[280800]: 2025-11-29 08:08:14.848228462 +0000 UTC m=+0.139601795 container remove d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.857 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7cce7e2b-65a0-44d7-ac8e-42c43bbf66d9]: (4, ('Sat Nov 29 08:08:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953 (d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d)\nd88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d\nSat Nov 29 08:08:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b360768-ee11-45df-a7b1-30c167686953 (d88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d)\nd88a4d980d3a273443f9c5f2781df40ce59fedb83515ae9de4badcc27fde307d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.860 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a302cd-0961-4919-8749-8b0992766588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.861 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b360768-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.862 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 kernel: tap2b360768-e0: left promiscuous mode
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.866 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.869 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[18c6f036-5aef-48aa-a9bd-fe8bd897e0f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 nova_compute[255040]: 2025-11-29 08:08:14.883 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.886 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9e58cb94-bd2f-41d3-a39d-f0bc10c08410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.887 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bea7be3b-455c-430c-b70c-e246cc9b6ebe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.909 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e24d495f-9dee-4056-9b47-0b4722cc49d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588599, 'reachable_time': 16399, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280815, 'error': None, 'target': 'ovnmeta-2b360768-ee11-45df-a7b1-30c167686953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.912 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b360768-ee11-45df-a7b1-30c167686953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:14.912 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[97bc473b-b197-4d41-ab07-631dbe363405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d2b360768\x2dee11\x2d45df\x2da7b1\x2d30c167686953.mount: Deactivated successfully.
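Annotation: the namespace teardown above ("Namespace ovnmeta-... deleted", followed by systemd unmounting run-netns) is performed by Neutron's privsep daemon via pyroute2. A minimal root-privileged equivalent, using the namespace name from the log:

import errno
from pyroute2 import netns

try:
    # Same effect as `ip netns delete ...`: detaches the namespace and
    # removes its /run/netns bind mount, which systemd then reports as
    # deactivated.
    netns.remove('ovnmeta-2b360768-ee11-45df-a7b1-30c167686953')
except OSError as exc:
    if exc.errno != errno.ENOENT:  # tolerate a namespace that is already gone
        raise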
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.144 255071 INFO nova.virt.libvirt.driver [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Deleting instance files /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab_del
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.144 255071 INFO nova.virt.libvirt.driver [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Deletion of /var/lib/nova/instances/fbf20945-7898-4904-95c5-0047536f3eab_del complete
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.196 255071 INFO nova.compute.manager [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Took 2.30 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.197 255071 DEBUG oslo.service.loopingcall [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.197 255071 DEBUG nova.compute.manager [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:15 compute-0 nova_compute[255040]: 2025-11-29 08:08:15.197 255071 DEBUG nova.network.neutron [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
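Annotation: the "Waiting for function ... _deallocate_network_with_retries to return" line above is oslo.service's looping-call helper, which Nova uses to retry network deallocation. A minimal illustration of the underlying primitive (a fixed interval is used here for simplicity; the backoff variant follows the same start()/wait() shape):

from oslo_service import loopingcall

attempts = []

def _deallocate():
    attempts.append(1)
    if len(attempts) == 3:  # pretend the third try succeeds
        raise loopingcall.LoopingCallDone(retvalue=True)

timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
result = timer.start(interval=0.1).wait()  # blocks until LoopingCallDone
print(result)  # True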
Nov 29 08:08:15 compute-0 ceph-mon[75237]: osdmap e260: 3 total, 3 up, 3 in
Nov 29 08:08:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 277 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 52 KiB/s wr, 189 op/s
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.100 255071 DEBUG nova.network.neutron [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.118 255071 INFO nova.compute.manager [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Took 0.92 seconds to deallocate network for instance.
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.170 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.171 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.209 255071 DEBUG nova.compute.manager [req-b6e8aa10-c859-4378-a0e7-7c1dce081236 req-7edfc98e-a041-4123-873e-81b2415f1265 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-deleted-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.238 255071 DEBUG oslo_concurrency.processutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 29 08:08:16 compute-0 ceph-mon[75237]: pgmap v1469: 305 pgs: 305 active+clean; 277 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 52 KiB/s wr, 189 op/s
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 08:08:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 29 08:08:16 compute-0 ovn_controller[153295]: 2025-11-29T08:08:16Z|00123|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 08:08:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.506 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:16.622 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061556998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.712 255071 DEBUG oslo_concurrency.processutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.725 255071 DEBUG nova.compute.provider_tree [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.742 255071 DEBUG nova.scheduler.client.report [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.768 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.801 255071 DEBUG nova.compute.manager [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.802 255071 DEBUG oslo_concurrency.lockutils [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fbf20945-7898-4904-95c5-0047536f3eab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.803 255071 DEBUG oslo_concurrency.lockutils [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.804 255071 DEBUG oslo_concurrency.lockutils [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.804 255071 DEBUG nova.compute.manager [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] No waiting events found dispatching network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.804 255071 WARNING nova.compute.manager [req-f4737001-bd41-40e1-bd2d-e8b5c4150b8a req-b3870cd5-6d8e-4950-b12e-d4273119710c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Received unexpected event network-vif-plugged-2eff6be1-3572-4ee8-b40e-208a0051b03c for instance with vm_state deleted and task_state None.
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.806 255071 INFO nova.scheduler.client.report [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Deleted allocations for instance fbf20945-7898-4904-95c5-0047536f3eab
Nov 29 08:08:16 compute-0 nova_compute[255040]: 2025-11-29 08:08:16.877 255071 DEBUG oslo_concurrency.lockutils [None req-86700b35-11db-478b-992d-bc328c124b6e 3299eefacb3a43a898b339895ff0f205 eeda2edc1f464a5480a29e4ff783c9b7 - - default default] Lock "fbf20945-7898-4904-95c5-0047536f3eab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:17 compute-0 ceph-mon[75237]: osdmap e261: 3 total, 3 up, 3 in
Nov 29 08:08:17 compute-0 ceph-mon[75237]: osdmap e262: 3 total, 3 up, 3 in
Nov 29 08:08:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4061556998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 29 08:08:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 08:08:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 29 08:08:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 277 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 43 KiB/s wr, 125 op/s
Nov 29 08:08:18 compute-0 ceph-mon[75237]: osdmap e263: 3 total, 3 up, 3 in
Nov 29 08:08:18 compute-0 ceph-mon[75237]: pgmap v1473: 305 pgs: 305 active+clean; 277 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 43 KiB/s wr, 125 op/s
Nov 29 08:08:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370238908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2370238908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ovn_controller[153295]: 2025-11-29T08:08:19Z|00124|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:08:19 compute-0 nova_compute[255040]: 2025-11-29 08:08:19.206 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:19 compute-0 nova_compute[255040]: 2025-11-29 08:08:19.614 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 29 08:08:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2370238908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2370238908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 08:08:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 29 08:08:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 214 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 6.7 KiB/s wr, 68 op/s
Nov 29 08:08:19 compute-0 nova_compute[255040]: 2025-11-29 08:08:19.694 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:20 compute-0 ceph-mon[75237]: osdmap e264: 3 total, 3 up, 3 in
Nov 29 08:08:20 compute-0 ceph-mon[75237]: pgmap v1475: 305 pgs: 305 active+clean; 214 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 6.7 KiB/s wr, 68 op/s
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.310 255071 DEBUG oslo_concurrency.lockutils [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.310 255071 DEBUG oslo_concurrency.lockutils [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.327 255071 INFO nova.compute.manager [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Detaching volume cf12aba3-a386-4b4b-b57d-9f09288b68cb
Nov 29 08:08:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:21 compute-0 ovn_controller[153295]: 2025-11-29T08:08:21Z|00125|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.544 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.562 255071 INFO nova.virt.block_device [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Attempting to driver detach volume cf12aba3-a386-4b4b-b57d-9f09288b68cb from mountpoint /dev/vdb
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.577 255071 DEBUG nova.virt.libvirt.driver [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Attempting to detach device vdb from instance fdfa056f-5aa2-4ec1-b558-19291f104ebd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.577 255071 DEBUG nova.virt.libvirt.guest [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cf12aba3-a386-4b4b-b57d-9f09288b68cb">
Nov 29 08:08:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <serial>cf12aba3-a386-4b4b-b57d-9f09288b68cb</serial>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:21 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.587 255071 INFO nova.virt.libvirt.driver [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully detached device vdb from instance fdfa056f-5aa2-4ec1-b558-19291f104ebd from the persistent domain config.
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.588 255071 DEBUG nova.virt.libvirt.driver [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance fdfa056f-5aa2-4ec1-b558-19291f104ebd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.589 255071 DEBUG nova.virt.libvirt.guest [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-cf12aba3-a386-4b4b-b57d-9f09288b68cb">
Nov 29 08:08:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <serial>cf12aba3-a386-4b4b-b57d-9f09288b68cb</serial>
Nov 29 08:08:21 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:08:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:08:21 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.604 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 213 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 8.7 KiB/s wr, 106 op/s
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.714 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403701.7136261, fdfa056f-5aa2-4ec1-b558-19291f104ebd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.716 255071 DEBUG nova.virt.libvirt.driver [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance fdfa056f-5aa2-4ec1-b558-19291f104ebd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.718 255071 INFO nova.virt.libvirt.driver [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully detached device vdb from instance fdfa056f-5aa2-4ec1-b558-19291f104ebd from the live domain config.
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.872 255071 DEBUG nova.objects.instance [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.910 255071 DEBUG oslo_concurrency.lockutils [None req-62f741a8-e3be-4600-983a-f1dea6e53692 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.912 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.912 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.913 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.913 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.914 255071 INFO nova.compute.manager [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Terminating instance
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.915 255071 DEBUG nova.compute.manager [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:21 compute-0 kernel: tapa591a89f-fb (unregistering): left promiscuous mode
Nov 29 08:08:21 compute-0 NetworkManager[49116]: <info>  [1764403701.9728] device (tapa591a89f-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.981 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-0 ovn_controller[153295]: 2025-11-29T08:08:21Z|00126|binding|INFO|Releasing lport a591a89f-fb00-4493-90c0-a41a373c5a5d from this chassis (sb_readonly=0)
Nov 29 08:08:21 compute-0 ovn_controller[153295]: 2025-11-29T08:08:21Z|00127|binding|INFO|Setting lport a591a89f-fb00-4493-90c0-a41a373c5a5d down in Southbound
Nov 29 08:08:21 compute-0 ovn_controller[153295]: 2025-11-29T08:08:21Z|00128|binding|INFO|Removing iface tapa591a89f-fb ovn-installed in OVS
Nov 29 08:08:21 compute-0 nova_compute[255040]: 2025-11-29 08:08:21.984 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:21.989 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:e1:53 10.100.0.13'], port_security=['fa:16:3e:1d:e1:53 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fdfa056f-5aa2-4ec1-b558-19291f104ebd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1606039-8d07-4578-bb07-e1193dc21498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03433030-da4a-462a-bc74-36d4da632562', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c31b5f0-bc6c-4aab-ba94-61fe7903fc35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=a591a89f-fb00-4493-90c0-a41a373c5a5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:21.990 163500 INFO neutron.agent.ovn.metadata.agent [-] Port a591a89f-fb00-4493-90c0-a41a373c5a5d in datapath b1606039-8d07-4578-bb07-e1193dc21498 unbound from our chassis
Nov 29 08:08:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:21.991 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1606039-8d07-4578-bb07-e1193dc21498, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:21.992 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[af04c2ee-5078-45ad-ae4e-5a1c8cb0a5d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:21 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:21.993 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 namespace which is not needed anymore
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.006 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 29 08:08:22 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 17.308s CPU time.
Nov 29 08:08:22 compute-0 systemd-machined[216271]: Machine qemu-12-instance-0000000c terminated.
Nov 29 08:08:22 compute-0 podman[280841]: 2025-11-29 08:08:22.105721186 +0000 UTC m=+0.099569938 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [NOTICE]   (279512) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [NOTICE]   (279512) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [WARNING]  (279512) : Exiting Master process...
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [WARNING]  (279512) : Exiting Master process...
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [ALERT]    (279512) : Current worker (279514) exited with code 143 (Terminated)
Nov 29 08:08:22 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[279506]: [WARNING]  (279512) : All workers exited. Exiting... (0)
Nov 29 08:08:22 compute-0 systemd[1]: libpod-8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd.scope: Deactivated successfully.
Nov 29 08:08:22 compute-0 podman[280884]: 2025-11-29 08:08:22.159530913 +0000 UTC m=+0.060451876 container died 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.167 255071 INFO nova.virt.libvirt.driver [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Instance destroyed successfully.
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.168 255071 DEBUG nova.objects.instance [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'resources' on Instance uuid fdfa056f-5aa2-4ec1-b558-19291f104ebd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.183 255071 DEBUG nova.virt.libvirt.vif [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1434087890',display_name='tempest-VolumesSnapshotTestJSON-instance-1434087890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1434087890',id=12,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL96ORfIik8BeRuckrnafOvlHRyjxUIWdH+at4r5POMR9+CtrL7JGCU8Mqd/H9E2HLcsOiM81E+IsSwOIPT+TVHX1Ez55V+XTvrnXrO+rTNDtC6s8MnjzygSU9av5U5xCQ==',key_name='tempest-keypair-748936652',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-368bhosc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=fdfa056f-5aa2-4ec1-b558-19291f104ebd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.185 255071 DEBUG nova.network.os_vif_util [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "address": "fa:16:3e:1d:e1:53", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa591a89f-fb", "ovs_interfaceid": "a591a89f-fb00-4493-90c0-a41a373c5a5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.186 255071 DEBUG nova.network.os_vif_util [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.186 255071 DEBUG os_vif [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.188 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.188 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa591a89f-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.190 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.192 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.194 255071 INFO os_vif [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:e1:53,bridge_name='br-int',has_traffic_filtering=True,id=a591a89f-fb00-4493-90c0-a41a373c5a5d,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa591a89f-fb')
Nov 29 08:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6494336fe498ee142c7068d1a4bfdc815a23e1c0ff5ff61813d866573877315-merged.mount: Deactivated successfully.
Nov 29 08:08:22 compute-0 podman[280884]: 2025-11-29 08:08:22.213227898 +0000 UTC m=+0.114148861 container cleanup 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:08:22 compute-0 systemd[1]: libpod-conmon-8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd.scope: Deactivated successfully.
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.274 255071 DEBUG nova.compute.manager [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-unplugged-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.275 255071 DEBUG oslo_concurrency.lockutils [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.275 255071 DEBUG oslo_concurrency.lockutils [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.275 255071 DEBUG oslo_concurrency.lockutils [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.276 255071 DEBUG nova.compute.manager [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] No waiting events found dispatching network-vif-unplugged-a591a89f-fb00-4493-90c0-a41a373c5a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.276 255071 DEBUG nova.compute.manager [req-c9422404-e844-47de-a171-25d5db8096d1 req-2d856412-cb64-4d3d-bd25-76942049307c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-unplugged-a591a89f-fb00-4493-90c0-a41a373c5a5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:22 compute-0 podman[280947]: 2025-11-29 08:08:22.300754911 +0000 UTC m=+0.054764813 container remove 8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.309 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9658d2e2-a229-4194-9903-a762c643e32b]: (4, ('Sat Nov 29 08:08:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 (8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd)\n8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd\nSat Nov 29 08:08:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 (8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd)\n8535d7ae63daf69785de3e946e3851c2f457c95f13354a2226c47d4d45ee95bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.311 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b4326604-51ca-4538-a3b9-a77f25e35de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.313 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1606039-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.315 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 kernel: tapb1606039-80: left promiscuous mode
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.317 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.320 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2db881b2-e3b3-465f-82ea-12c7eddb97ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.336 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.339 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7a959f34-6b3d-4b0b-a5e4-9ee33425ea2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.341 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5134846c-cbd8-47ee-8eec-5b175f7bcea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.359 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f031d7f7-814d-4bdc-88f3-d38b89e8589a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588782, 'reachable_time': 33929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280965, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.363 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:22.363 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4ebab6-cf53-40ee-95c4-1132d731135d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:22 compute-0 systemd[1]: run-netns-ovnmeta\x2db1606039\x2d8d07\x2d4578\x2dbb07\x2de1193dc21498.mount: Deactivated successfully.
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.594 255071 INFO nova.virt.libvirt.driver [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Deleting instance files /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd_del
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.595 255071 INFO nova.virt.libvirt.driver [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Deletion of /var/lib/nova/instances/fdfa056f-5aa2-4ec1-b558-19291f104ebd_del complete
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.662 255071 INFO nova.compute.manager [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.663 255071 DEBUG oslo.service.loopingcall [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.663 255071 DEBUG nova.compute.manager [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:22 compute-0 nova_compute[255040]: 2025-11-29 08:08:22.663 255071 DEBUG nova.network.neutron [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:22 compute-0 ceph-mon[75237]: pgmap v1476: 305 pgs: 305 active+clean; 213 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 8.7 KiB/s wr, 106 op/s
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.671 255071 DEBUG nova.network.neutron [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.690 255071 INFO nova.compute.manager [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Took 1.03 seconds to deallocate network for instance.
Nov 29 08:08:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 179 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.823 255071 WARNING nova.volume.cinder [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Attachment e62618f1-4cf1-483f-b657-9c7818451899 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = e62618f1-4cf1-483f-b657-9c7818451899. (HTTP 404) (Request-ID: req-c29b224d-d26c-41b1-a176-97962b889c3b)
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.824 255071 INFO nova.compute.manager [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Took 0.13 seconds to detach 1 volumes for instance.
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.869 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.869 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:23 compute-0 nova_compute[255040]: 2025-11-29 08:08:23.924 255071 DEBUG oslo_concurrency.processutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.182 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403689.1814513, e13306d3-0b4c-4937-8b4b-83605575ce82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.184 255071 INFO nova.compute.manager [-] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] VM Stopped (Lifecycle Event)
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.201 255071 DEBUG nova.compute.manager [None req-39105ce0-16a9-4f9a-9be8-6807f9438f76 - - - - - -] [instance: e13306d3-0b4c-4937-8b4b-83605575ce82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239422142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.377 255071 DEBUG nova.compute.manager [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.377 255071 DEBUG oslo_concurrency.lockutils [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.377 255071 DEBUG oslo_concurrency.lockutils [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.378 255071 DEBUG oslo_concurrency.lockutils [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.378 255071 DEBUG nova.compute.manager [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] No waiting events found dispatching network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.378 255071 WARNING nova.compute.manager [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received unexpected event network-vif-plugged-a591a89f-fb00-4493-90c0-a41a373c5a5d for instance with vm_state deleted and task_state None.
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.378 255071 DEBUG nova.compute.manager [req-5b1005a0-a2ca-41ba-b490-6969d0b9c798 req-7be22c25-face-43de-87ad-56d52319ae19 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Received event network-vif-deleted-a591a89f-fb00-4493-90c0-a41a373c5a5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.381 255071 DEBUG oslo_concurrency.processutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.388 255071 DEBUG nova.compute.provider_tree [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.413 255071 DEBUG nova.scheduler.client.report [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.442 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.494 255071 INFO nova.scheduler.client.report [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Deleted allocations for instance fdfa056f-5aa2-4ec1-b558-19291f104ebd
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.581 255071 DEBUG oslo_concurrency.lockutils [None req-64bbaaac-f9a2-4dd0-8492-fe48884717bb 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "fdfa056f-5aa2-4ec1-b558-19291f104ebd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.697 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.729 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:24 compute-0 ceph-mon[75237]: pgmap v1477: 305 pgs: 305 active+clean; 179 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Nov 29 08:08:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/239422142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.875 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:24 compute-0 nova_compute[255040]: 2025-11-29 08:08:24.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 8.1 KiB/s wr, 129 op/s
Nov 29 08:08:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 29 08:08:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 08:08:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 29 08:08:26 compute-0 ceph-mon[75237]: pgmap v1478: 305 pgs: 305 active+clean; 134 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 8.1 KiB/s wr, 129 op/s
Nov 29 08:08:26 compute-0 ceph-mon[75237]: osdmap e265: 3 total, 3 up, 3 in
Nov 29 08:08:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558438702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558438702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:27.130 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:27.130 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:27.131 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:27 compute-0 nova_compute[255040]: 2025-11-29 08:08:27.191 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 134 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 5.0 KiB/s wr, 98 op/s
Nov 29 08:08:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3558438702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3558438702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:27 compute-0 nova_compute[255040]: 2025-11-29 08:08:27.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 29 08:08:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 08:08:28 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:08:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 29 08:08:28 compute-0 ceph-mon[75237]: pgmap v1480: 305 pgs: 305 active+clean; 134 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 5.0 KiB/s wr, 98 op/s
Nov 29 08:08:28 compute-0 nova_compute[255040]: 2025-11-29 08:08:28.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:28 compute-0 nova_compute[255040]: 2025-11-29 08:08:28.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:08:28 compute-0 nova_compute[255040]: 2025-11-29 08:08:28.994 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:08:29 compute-0 nova_compute[255040]: 2025-11-29 08:08:29.534 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403694.5332298, fbf20945-7898-4904-95c5-0047536f3eab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:29 compute-0 nova_compute[255040]: 2025-11-29 08:08:29.535 255071 INFO nova.compute.manager [-] [instance: fbf20945-7898-4904-95c5-0047536f3eab] VM Stopped (Lifecycle Event)
Nov 29 08:08:29 compute-0 nova_compute[255040]: 2025-11-29 08:08:29.557 255071 DEBUG nova.compute.manager [None req-054ed0fd-59e1-4502-8f5a-4bfbc597f53c - - - - - -] [instance: fbf20945-7898-4904-95c5-0047536f3eab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 100 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Nov 29 08:08:29 compute-0 nova_compute[255040]: 2025-11-29 08:08:29.698 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:29 compute-0 ceph-mon[75237]: osdmap e266: 3 total, 3 up, 3 in
Nov 29 08:08:29 compute-0 nova_compute[255040]: 2025-11-29 08:08:29.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 29 08:08:30 compute-0 ceph-mon[75237]: pgmap v1482: 305 pgs: 305 active+clean; 100 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Nov 29 08:08:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 08:08:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 29 08:08:30 compute-0 nova_compute[255040]: 2025-11-29 08:08:30.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:30 compute-0 nova_compute[255040]: 2025-11-29 08:08:30.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:30 compute-0 nova_compute[255040]: 2025-11-29 08:08:30.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:08:30 compute-0 nova_compute[255040]: 2025-11-29 08:08:30.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.002 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.002 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.002 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.003 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.003 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681097496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.479 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:31 compute-0 podman[281014]: 2025-11-29 08:08:31.6087218 +0000 UTC m=+0.075410840 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.688 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.690 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4491MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.690 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.691 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.3 KiB/s wr, 38 op/s
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.765 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.766 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.786 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.803 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.804 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.820 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:08:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 29 08:08:31 compute-0 ceph-mon[75237]: osdmap e267: 3 total, 3 up, 3 in
Nov 29 08:08:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3681097496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.850 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:08:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 08:08:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 29 08:08:31 compute-0 nova_compute[255040]: 2025-11-29 08:08:31.871 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.193 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739564492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.344 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.351 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.368 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.390 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:08:32 compute-0 nova_compute[255040]: 2025-11-29 08:08:32.391 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:32 compute-0 ceph-mon[75237]: pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.3 KiB/s wr, 38 op/s
Nov 29 08:08:32 compute-0 ceph-mon[75237]: osdmap e268: 3 total, 3 up, 3 in
Nov 29 08:08:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3739564492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:33 compute-0 nova_compute[255040]: 2025-11-29 08:08:33.386 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.3 KiB/s wr, 72 op/s
Nov 29 08:08:33 compute-0 nova_compute[255040]: 2025-11-29 08:08:33.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:34 compute-0 nova_compute[255040]: 2025-11-29 08:08:34.700 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:34 compute-0 ceph-mon[75237]: pgmap v1486: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.3 KiB/s wr, 72 op/s
Nov 29 08:08:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Nov 29 08:08:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 29 08:08:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 08:08:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 29 08:08:36 compute-0 ceph-mon[75237]: pgmap v1487: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Nov 29 08:08:36 compute-0 ceph-mon[75237]: osdmap e269: 3 total, 3 up, 3 in
Nov 29 08:08:37 compute-0 nova_compute[255040]: 2025-11-29 08:08:37.161 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403702.1597757, fdfa056f-5aa2-4ec1-b558-19291f104ebd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:37 compute-0 nova_compute[255040]: 2025-11-29 08:08:37.161 255071 INFO nova.compute.manager [-] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] VM Stopped (Lifecycle Event)
Nov 29 08:08:37 compute-0 nova_compute[255040]: 2025-11-29 08:08:37.188 255071 DEBUG nova.compute.manager [None req-551ad078-0683-42f5-bb46-fe7c48c9dbc7 - - - - - -] [instance: fdfa056f-5aa2-4ec1-b558-19291f104ebd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:37 compute-0 nova_compute[255040]: 2025-11-29 08:08:37.196 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.419 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.419 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.440 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.516 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.517 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.524 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.524 255071 INFO nova.compute.claims [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:08:38 compute-0 nova_compute[255040]: 2025-11-29 08:08:38.627 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:08:38
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data', 'backups']
Nov 29 08:08:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:08:38 compute-0 podman[281075]: 2025-11-29 08:08:38.895400167 +0000 UTC m=+0.062657955 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 08:08:38 compute-0 ceph-mon[75237]: pgmap v1489: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Nov 29 08:08:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296649273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.118 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.125 255071 DEBUG nova.compute.provider_tree [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.146 255071 DEBUG nova.scheduler.client.report [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.169 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.170 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.215 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.216 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.250 255071 INFO nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.272 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.369 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.371 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.371 255071 INFO nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Creating image(s)
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.404 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.433 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.463 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.468 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.503 255071 DEBUG nova.policy [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '090eb6259968476885903b5734f6f67a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.550 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.551 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.552 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.553 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.576 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.580 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 169b2d31-6539-4279-bf7a-f46078e1d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.703 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2296649273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:39 compute-0 nova_compute[255040]: 2025-11-29 08:08:39.927 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 169b2d31-6539-4279-bf7a-f46078e1d624_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.000 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] resizing rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
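Editor's note: taken together, the qemu-img info, rbd import and resize lines above are the "copy the cached base image into the Ceph vms pool, then grow it to the flavor's 1 GiB root disk" step. A rough stand-alone equivalent using the same external commands the log shows; nova itself performs the resize through its rbd_utils/librbd wrapper rather than the rbd CLI, and the paths, pool and client id here are simply copied from the log lines.

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059"
    rbd_name = "169b2d31-6539-4279-bf7a-f46078e1d624_disk"

    # 1. Inspect the cached base image (same qemu-img invocation as logged).
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base, "--force-share", "--output=json"]))
    print(info["format"], info["virtual-size"])

    # 2. Import it into the 'vms' pool, then grow it to 1 GiB (1024 MB).
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, rbd_name,
                           "--image-format=2", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])
    subprocess.check_call(["rbd", "resize", "--pool", "vms", rbd_name,
                           "--size", "1024", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])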
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.114 255071 DEBUG nova.objects.instance [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'migration_context' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.118 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Successfully created port: 6d9c5f72-469c-4971-a11c-287eba2d8490 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.141 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.141 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Ensure instance console log exists: /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.142 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.142 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:40 compute-0 nova_compute[255040]: 2025-11-29 08:08:40.142 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:40 compute-0 ceph-mon[75237]: pgmap v1490: 305 pgs: 305 active+clean; 88 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.312 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Successfully updated port: 6d9c5f72-469c-4971-a11c-287eba2d8490 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.331 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.332 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquired lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.333 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.434 255071 DEBUG nova.compute.manager [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-changed-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.435 255071 DEBUG nova.compute.manager [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Refreshing instance network info cache due to event network-changed-6d9c5f72-469c-4971-a11c-287eba2d8490. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.435 255071 DEBUG oslo_concurrency.lockutils [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:41 compute-0 nova_compute[255040]: 2025-11-29 08:08:41.510 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:08:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 95 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 327 KiB/s wr, 24 op/s
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.197 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.342 255071 DEBUG nova.network.neutron [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating instance_info_cache with network_info: [{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.360 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Releasing lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.361 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Instance network_info: |[{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
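Editor's note: the network_info blob logged above is what the rest of the boot consumes (MAC, fixed IP, MTU, tap device name). A small reading aid, with the structure trimmed to just the fields used later in this log; the trimmed literal below is an excerpt, not the full object.

    # Trimmed copy of the network_info structure from the cache-update line above.
    network_info = [{
        "id": "6d9c5f72-469c-4971-a11c-287eba2d8490",
        "address": "fa:16:3e:22:1d:1e",
        "devname": "tap6d9c5f72-46",
        "network": {"meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.0.4"}]}]},
    }]

    vif = network_info[0]
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["address"], vif["devname"], vif["network"]["meta"]["mtu"], fixed_ips)
    # fa:16:3e:22:1d:1e tap6d9c5f72-46 1442 ['10.100.0.4']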
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.362 255071 DEBUG oslo_concurrency.lockutils [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.363 255071 DEBUG nova.network.neutron [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Refreshing network info cache for port 6d9c5f72-469c-4971-a11c-287eba2d8490 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.365 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Start _get_guest_xml network_info=[{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.371 255071 WARNING nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.375 255071 DEBUG nova.virt.libvirt.host [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.376 255071 DEBUG nova.virt.libvirt.host [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.384 255071 DEBUG nova.virt.libvirt.host [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.385 255071 DEBUG nova.virt.libvirt.host [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
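Editor's note: the two host.py probes above first look for a cgroups v1 cpu controller (absent on this EL9 host) and then find one under cgroups v2. A minimal sketch of a v2-style probe, assuming the standard unified-hierarchy layout; nova's own check lives in nova.virt.libvirt.host and may differ in detail.

    # On a cgroups-v2 host the enabled controllers are listed in one file.
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False      # not a cgroups-v2 (unified) host

    print(has_cgroupsv2_cpu_controller())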
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.385 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.386 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.386 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.386 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.387 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.387 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.387 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.388 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.388 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.388 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.388 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.389 255071 DEBUG nova.virt.hardware [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
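Editor's note: the hardware.py lines above walk the CPU-topology search: with no flavor or image constraints the limits default to 65536 sockets/cores/threads, and for a single vCPU the only factorisation is 1x1x1, which becomes the <topology sockets="1" cores="1" threads="1"/> element in the guest XML further down. A toy version of that enumeration, illustrative only and not nova's implementation.

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Enumerate sockets*cores*threads combinations that exactly cover the
        # vCPU count while staying inside the per-dimension limits.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1, 65536, 65536, 65536))   # [(1, 1, 1)], as logged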
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.392 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876467115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.857 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.879 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:42 compute-0 nova_compute[255040]: 2025-11-29 08:08:42.884 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:42 compute-0 ceph-mon[75237]: pgmap v1491: 305 pgs: 305 active+clean; 95 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 327 KiB/s wr, 24 op/s
Nov 29 08:08:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/876467115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203952465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.336 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
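Editor's note: the ceph mon dump calls above are how the driver learns the monitor endpoints that show up as <host name=... port=.../> under each rbd disk in the XML below. A hedged sketch of that lookup; the JSON field names (mons[].public_addr in ip:port/nonce form) follow the usual mon-dump layout and may differ across Ceph releases.

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)

    hosts = []
    for mon in monmap["mons"]:
        addr = mon.get("public_addr") or mon.get("addr", "")
        host, _, rest = addr.partition(":")          # "192.168.122.100:6789/0"
        port = rest.split("/")[0] or "6789"
        hosts.append((host, port))
    print(hosts)   # e.g. [("192.168.122.100", "6789")]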
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.338 255071 DEBUG nova.virt.libvirt.vif [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1687597056',display_name='tempest-VolumesSnapshotTestJSON-instance-1687597056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1687597056',id=13,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzXCfFmv5CFzeg5CC3EBDIzgpMgUXWz+ppHsYmCwwce636iY+6Tiw98CYBaZyZ6d1ODwaICbKpBZNxJYT/FMGzwNKpoQdJgsjUs1+53EMI7xDZW99L2NGxqLHAuB8aCfQ==',key_name='tempest-keypair-1970025283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-akg0sr7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=169b2d31-6539-4279-bf7a-f46078e1d624,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.338 255071 DEBUG nova.network.os_vif_util [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.339 255071 DEBUG nova.network.os_vif_util [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.341 255071 DEBUG nova.objects.instance [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'pci_devices' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.356 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <uuid>169b2d31-6539-4279-bf7a-f46078e1d624</uuid>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <name>instance-0000000d</name>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1687597056</nova:name>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:08:42</nova:creationTime>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:user uuid="090eb6259968476885903b5734f6f67a">tempest-VolumesSnapshotTestJSON-248670584-project-member</nova:user>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:project uuid="87f822d62c8f4ac6bed1a893f2b9e73f">tempest-VolumesSnapshotTestJSON-248670584</nova:project>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <nova:port uuid="6d9c5f72-469c-4971-a11c-287eba2d8490">
Nov 29 08:08:43 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <system>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="serial">169b2d31-6539-4279-bf7a-f46078e1d624</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="uuid">169b2d31-6539-4279-bf7a-f46078e1d624</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </system>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <os>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </os>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <features>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </features>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/169b2d31-6539-4279-bf7a-f46078e1d624_disk">
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </source>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/169b2d31-6539-4279-bf7a-f46078e1d624_disk.config">
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </source>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:08:43 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:22:1d:1e"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <target dev="tap6d9c5f72-46"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/console.log" append="off"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <video>
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </video>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:08:43 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:08:43 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:08:43 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:08:43 compute-0 nova_compute[255040]: </domain>
Nov 29 08:08:43 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
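Editor's note: the <domain> document ending above is handed to libvirtd to define and start the guest. Nova does this through its own Host/Guest wrappers; the snippet below is a bare-bones equivalent with the libvirt python bindings, shown only to make the hand-off concrete, and domain.xml is assumed to hold the XML just logged.

    import libvirt

    with open("domain.xml") as f:       # the <domain> document logged above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)           # persist the definition
    dom.create()                        # power the guest on
    print(dom.name(), dom.ID())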
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.358 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Preparing to wait for external event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.359 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.359 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.360 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.361 255071 DEBUG nova.virt.libvirt.vif [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1687597056',display_name='tempest-VolumesSnapshotTestJSON-instance-1687597056',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1687597056',id=13,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzXCfFmv5CFzeg5CC3EBDIzgpMgUXWz+ppHsYmCwwce636iY+6Tiw98CYBaZyZ6d1ODwaICbKpBZNxJYT/FMGzwNKpoQdJgsjUs1+53EMI7xDZW99L2NGxqLHAuB8aCfQ==',key_name='tempest-keypair-1970025283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-akg0sr7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=169b2d31-6539-4279-bf7a-f46078e1d624,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.361 255071 DEBUG nova.network.os_vif_util [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.362 255071 DEBUG nova.network.os_vif_util [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.364 255071 DEBUG os_vif [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.365 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.366 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.366 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.370 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.371 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c5f72-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.371 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d9c5f72-46, col_values=(('external_ids', {'iface-id': '6d9c5f72-469c-4971-a11c-287eba2d8490', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:1d:1e', 'vm-uuid': '169b2d31-6539-4279-bf7a-f46078e1d624'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.373 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:43 compute-0 NetworkManager[49116]: <info>  [1764403723.3746] manager: (tap6d9c5f72-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.383 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.384 255071 INFO os_vif [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46')
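Editor's note: the two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) amount to creating the tap port on br-int and tagging its Interface row with the Neutron port id so OVN can bind it. Expressed below as the equivalent ovs-vsctl calls, with all values copied from the log; os-vif actually talks to ovsdb-server directly, so this is illustration only.

    import subprocess

    port = "tap6d9c5f72-46"
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", "br-int", port])
    subprocess.check_call([
        "ovs-vsctl", "set", "Interface", port,
        "external_ids:iface-id=6d9c5f72-469c-4971-a11c-287eba2d8490",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:22:1d:1e",
        "external_ids:vm-uuid=169b2d31-6539-4279-bf7a-f46078e1d624",
    ])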
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.429 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.430 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.430 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No VIF found with MAC fa:16:3e:22:1d:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.431 255071 INFO nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Using config drive
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.457 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.464 255071 DEBUG nova.network.neutron [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updated VIF entry in instance network info cache for port 6d9c5f72-469c-4971-a11c-287eba2d8490. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.464 255071 DEBUG nova.network.neutron [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating instance_info_cache with network_info: [{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.487 255071 DEBUG oslo_concurrency.lockutils [req-1bec6ef2-6601-4ba2-a3c3-060d0694286b req-10b8896a-e7c9-4f08-a1cb-7705efa2377c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 111 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 981 KiB/s wr, 32 op/s
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.768 255071 INFO nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Creating config drive at /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.776 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxmtxmo3l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.914 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxmtxmo3l" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
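[editor's note] The config drive is produced by shelling out to mkisofs, building a Joliet/Rock Ridge ISO labelled config-2 from a temporary staging directory, exactly as the CMD line above shows. A rough sketch of the same call via oslo.concurrency follows; the output path and staging directory here are stand-ins (Nova uses the instance directory and a tmp dir it populates with the metadata tree).

    # Sketch of the mkisofs invocation logged above; paths are placeholders and
    # the staging directory would normally contain the openstack/ metadata tree.
    import tempfile
    from oslo_concurrency import processutils

    staging = tempfile.mkdtemp(prefix='cfgdrv-')
    iso_path = '/tmp/disk.config'

    processutils.execute(
        'mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', staging)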
Nov 29 08:08:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/203952465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.954 255071 DEBUG nova.storage.rbd_utils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] rbd image 169b2d31-6539-4279-bf7a-f46078e1d624_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:43 compute-0 nova_compute[255040]: 2025-11-29 08:08:43.959 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config 169b2d31-6539-4279-bf7a-f46078e1d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.148 255071 DEBUG oslo_concurrency.processutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config 169b2d31-6539-4279-bf7a-f46078e1d624_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.149 255071 INFO nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Deleting local config drive /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config because it was imported into RBD.
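[editor's note] Because the instance disks live in Ceph, the freshly built ISO is imported into the vms pool and the local copy removed, as the two lines above record. A sketch of those two steps using the same rbd CLI arguments; path, image name, pool and cephx user are taken directly from the log.

    # Sketch: push the local config-drive ISO into the 'vms' pool as
    # <uuid>_disk.config, then delete the local file (matches the log above).
    import os
    import subprocess

    local = '/var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624/disk.config'
    image = '169b2d31-6539-4279-bf7a-f46078e1d624_disk.config'

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, image,
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.remove(local)  # "Deleting local config drive ... imported into RBD"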
Nov 29 08:08:44 compute-0 kernel: tap6d9c5f72-46: entered promiscuous mode
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.2089] manager: (tap6d9c5f72-46): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.210 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ovn_controller[153295]: 2025-11-29T08:08:44Z|00129|binding|INFO|Claiming lport 6d9c5f72-469c-4971-a11c-287eba2d8490 for this chassis.
Nov 29 08:08:44 compute-0 ovn_controller[153295]: 2025-11-29T08:08:44Z|00130|binding|INFO|6d9c5f72-469c-4971-a11c-287eba2d8490: Claiming fa:16:3e:22:1d:1e 10.100.0.4
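[editor's note] The two binding messages above mean ovn-controller has claimed the logical port for this chassis; shortly afterwards it marks the port ovn-installed and up in the Southbound database. One way to confirm the resulting Port_Binding row from the compute node is sketched below, assuming ovn-sbctl on this host can reach the same Southbound connection that ovn-controller uses.

    # Sketch: look up the Southbound Port_Binding for the logical port claimed
    # above. Requires access to the OVN SB database (an assumption here).
    import subprocess

    port = '6d9c5f72-469c-4971-a11c-287eba2d8490'
    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding', f'logical_port={port}'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis and up columns should reflect the claim logged above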
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.215 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.227 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:1d:1e 10.100.0.4'], port_security=['fa:16:3e:22:1d:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '169b2d31-6539-4279-bf7a-f46078e1d624', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1606039-8d07-4578-bb07-e1193dc21498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f2aadc9d-fab8-454c-8d6d-96d62ba75cc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c31b5f0-bc6c-4aab-ba94-61fe7903fc35, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=6d9c5f72-469c-4971-a11c-287eba2d8490) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.228 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 6d9c5f72-469c-4971-a11c-287eba2d8490 in datapath b1606039-8d07-4578-bb07-e1193dc21498 bound to our chassis
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.230 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:08:44 compute-0 systemd-udevd[281398]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.247 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[da98521d-d5ed-4861-ade9-6dc87d60bd87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.248 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1606039-81 in ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
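[editor's note] The metadata agent provisions a per-network namespace (ovnmeta-<network-uuid>) and a veth pair whose inner end, tapb1606039-81, is moved into it; the outer end is later plugged into br-int. A quick way to inspect the result is sketched below; the namespace name comes from the log line above and, like the agent itself, the command needs root.

    # Sketch: list interfaces inside the ovnmeta namespace created above.
    import subprocess

    ns = 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', 'addr', 'show'], check=True)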
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.250 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1606039-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.250 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77999c-e5a5-4a17-9eee-a9ccc4958fd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.251 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9f0a3d-96ed-4c5a-ac0c-ffbfc01787e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 systemd-machined[216271]: New machine qemu-13-instance-0000000d.
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.2605] device (tap6d9c5f72-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.2620] device (tap6d9c5f72-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.267 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e14320-c008-4c1f-9fd2-d0506cfb98fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.292 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ovn_controller[153295]: 2025-11-29T08:08:44Z|00131|binding|INFO|Setting lport 6d9c5f72-469c-4971-a11c-287eba2d8490 ovn-installed in OVS
Nov 29 08:08:44 compute-0 ovn_controller[153295]: 2025-11-29T08:08:44Z|00132|binding|INFO|Setting lport 6d9c5f72-469c-4971-a11c-287eba2d8490 up in Southbound
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.298 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.298 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5383781d-3510-4ba7-8cc6-7dcf70608ce4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.347 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c382e1-9c7d-4d6b-b074-fb8b756d8908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.3576] manager: (tapb1606039-80): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.356 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ad94716e-a9fe-43f1-aea3-07d302a69a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 systemd-udevd[281402]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.394 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a442716f-c1c9-4686-b3db-e3cc655941f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.399 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[2f637b76-bd40-45a8-9be1-bf5fa65e0af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.4265] device (tapb1606039-80): carrier: link connected
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.435 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca1bdae-a7f6-4151-b7b8-133bb9b7d118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.460 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f8762c90-0095-447f-b135-3c740b4d1750]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1606039-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:11:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596176, 'reachable_time': 36961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281431, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.480 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[49b20afc-0363-432f-95ef-ad04c68900c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:11b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596176, 'tstamp': 596176}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281432, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.505 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[079c355e-71df-4a43-a0c7-6cff5800a8d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1606039-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:11:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596176, 'reachable_time': 36961, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281433, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.549 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6db807-bf77-4af5-a6c7-c829c1aed7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.618 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d27c4fcc-6044-499e-bd76-5e6ec605cf02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.620 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1606039-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.621 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.621 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1606039-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 kernel: tapb1606039-80: entered promiscuous mode
Nov 29 08:08:44 compute-0 NetworkManager[49116]: <info>  [1764403724.6256] manager: (tapb1606039-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.626 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1606039-80, col_values=(('external_ids', {'iface-id': '27ddf48d-41ab-4a2b-bcec-12ec830f91a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:44 compute-0 ovn_controller[153295]: 2025-11-29T08:08:44Z|00133|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.627 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.642 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.644 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.646 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[594ed963-5c43-4e1b-b7b8-9b385614bddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.646 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/b1606039-8d07-4578-bb07-e1193dc21498.pid.haproxy
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID b1606039-8d07-4578-bb07-e1193dc21498
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:08:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:08:44.647 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'env', 'PROCESS_TAG=haproxy-b1606039-8d07-4578-bb07-e1193dc21498', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1606039-8d07-4578-bb07-e1193dc21498.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
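[editor's note] The haproxy_cfg block above binds 169.254.169.254:80 inside the namespace, forwards requests to Neutron's metadata UNIX socket and tags them with the network ID via X-OVN-Network-ID; the command line that follows launches haproxy inside the ovnmeta namespace through rootwrap. Before (or after) that launch, the rendered file can be syntax-checked; a sketch, assuming the config sits at the '-f' path shown in the logged command.

    # Sketch: validate the generated haproxy config for this network.
    import subprocess

    cfg = '/var/lib/neutron/ovn-metadata-proxy/b1606039-8d07-4578-bb07-e1193dc21498.conf'
    subprocess.run(['haproxy', '-c', '-f', cfg], check=True)  # '-c' = check config and exit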
Nov 29 08:08:44 compute-0 nova_compute[255040]: 2025-11-29 08:08:44.705 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:44 compute-0 ceph-mon[75237]: pgmap v1492: 305 pgs: 305 active+clean; 111 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 981 KiB/s wr, 32 op/s
Nov 29 08:08:45 compute-0 podman[281465]: 2025-11-29 08:08:45.04820694 +0000 UTC m=+0.053348755 container create 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:08:45 compute-0 podman[281465]: 2025-11-29 08:08:45.02031684 +0000 UTC m=+0.025458625 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:08:45 compute-0 systemd[1]: Started libpod-conmon-8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3.scope.
Nov 29 08:08:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420aa4b62d308dd83f558cee406e30a228b3d8025a675691135947b0869b96f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:45 compute-0 podman[281465]: 2025-11-29 08:08:45.19991972 +0000 UTC m=+0.205061575 container init 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:08:45 compute-0 podman[281465]: 2025-11-29 08:08:45.206463237 +0000 UTC m=+0.211605052 container start 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 08:08:45 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [NOTICE]   (281484) : New worker (281486) forked
Nov 29 08:08:45 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [NOTICE]   (281484) : Loading success.
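[editor's note] On this deployment the metadata proxy runs as a podman container named after the namespace (see the container create/start lines above), so once haproxy reports "Loading success" its state can be checked like any other container; a sketch:

    # Sketch: confirm the metadata-proxy container started above is running.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498'
    subprocess.run(['podman', 'ps', '--filter', f'name={name}'], check=True)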
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.502 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403725.5019813, 169b2d31-6539-4279-bf7a-f46078e1d624 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.503 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] VM Started (Lifecycle Event)
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.526 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.531 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403725.5057528, 169b2d31-6539-4279-bf7a-f46078e1d624 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.531 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] VM Paused (Lifecycle Event)
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.553 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.558 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.577 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] During sync_power_state the instance has a pending task (spawning). Skip.
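[editor's note] The integers in the sync_power_state lines are Nova power-state constants: the "Paused" lifecycle event above reports VM power_state 3 (PAUSED) against DB power_state 0 (NOSTATE), and the later "Resumed" event reports 1 (RUNNING). A tiny decoder is sketched below; the mapping is restated locally so it runs without Nova installed, and it is intended to match nova.compute.power_state.

    # Sketch: decode the power_state integers seen in the sync_power_state lines.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    for db_state, vm_state in [(0, 3), (0, 1)]:  # pairs observed in this log
        print(f'DB={POWER_STATE[db_state]} hypervisor={POWER_STATE[vm_state]}')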
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.696 255071 DEBUG nova.compute.manager [req-1a3e8325-df85-466d-a996-7755b1499802 req-723490e7-6b70-492c-b4fe-324cc1cbb26b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.697 255071 DEBUG oslo_concurrency.lockutils [req-1a3e8325-df85-466d-a996-7755b1499802 req-723490e7-6b70-492c-b4fe-324cc1cbb26b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.697 255071 DEBUG oslo_concurrency.lockutils [req-1a3e8325-df85-466d-a996-7755b1499802 req-723490e7-6b70-492c-b4fe-324cc1cbb26b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.698 255071 DEBUG oslo_concurrency.lockutils [req-1a3e8325-df85-466d-a996-7755b1499802 req-723490e7-6b70-492c-b4fe-324cc1cbb26b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.698 255071 DEBUG nova.compute.manager [req-1a3e8325-df85-466d-a996-7755b1499802 req-723490e7-6b70-492c-b4fe-324cc1cbb26b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Processing event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.699 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.704 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403725.7036932, 169b2d31-6539-4279-bf7a-f46078e1d624 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.704 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] VM Resumed (Lifecycle Event)
Nov 29 08:08:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.706 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.710 255071 INFO nova.virt.libvirt.driver [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Instance spawned successfully.
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.711 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.725 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.732 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.737 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.737 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.738 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.738 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.739 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.740 255071 DEBUG nova.virt.libvirt.driver [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.751 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.786 255071 INFO nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Took 6.42 seconds to spawn the instance on the hypervisor.
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.787 255071 DEBUG nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.848 255071 INFO nova.compute.manager [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Took 7.36 seconds to build instance.
Nov 29 08:08:45 compute-0 nova_compute[255040]: 2025-11-29 08:08:45.863 255071 DEBUG oslo_concurrency.lockutils [None req-a5e6bc98-9941-433e-9bc9-5f61e03ce867 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
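[editor's note] At this point the guest is up and the per-instance build lock is released (held 7.444 s, of which 6.42 s was hypervisor spawn time per the lines above). From any host with credentials for this cloud, the resulting state can be confirmed with python-openstackclient; a sketch, assuming the openstack CLI is installed and the usual OS_* auth environment variables are set.

    # Sketch: confirm the instance built above reached ACTIVE. Assumes an
    # authenticated 'openstack' CLI is available in the environment.
    import json
    import subprocess

    uuid = '169b2d31-6539-4279-bf7a-f46078e1d624'
    out = subprocess.run(['openstack', 'server', 'show', uuid, '-f', 'json'],
                         capture_output=True, text=True, check=True)
    server = json.loads(out.stdout)
    print(server['status'], server.get('OS-EXT-STS:power_state'))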
Nov 29 08:08:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:46 compute-0 ceph-mon[75237]: pgmap v1493: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 29 08:08:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Nov 29 08:08:47 compute-0 NetworkManager[49116]: <info>  [1764403727.7690] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 08:08:47 compute-0 NetworkManager[49116]: <info>  [1764403727.7716] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.767 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.896 255071 DEBUG nova.compute.manager [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.896 255071 DEBUG oslo_concurrency.lockutils [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.897 255071 DEBUG oslo_concurrency.lockutils [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.897 255071 DEBUG oslo_concurrency.lockutils [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.897 255071 DEBUG nova.compute.manager [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] No waiting events found dispatching network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.897 255071 WARNING nova.compute.manager [req-177cbf3a-5adf-49f1-b6df-4ad4132bb6fe req-c8f96bb7-5f24-46b0-8b2e-59186f6fe759 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received unexpected event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 for instance with vm_state active and task_state None.
Nov 29 08:08:47 compute-0 nova_compute[255040]: 2025-11-29 08:08:47.987 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:47 compute-0 ovn_controller[153295]: 2025-11-29T08:08:47Z|00134|binding|INFO|Releasing lport 27ddf48d-41ab-4a2b-bcec-12ec830f91a5 from this chassis (sb_readonly=0)
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.009 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.085 255071 DEBUG nova.compute.manager [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-changed-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.086 255071 DEBUG nova.compute.manager [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Refreshing instance network info cache due to event network-changed-6d9c5f72-469c-4971-a11c-287eba2d8490. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.086 255071 DEBUG oslo_concurrency.lockutils [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.087 255071 DEBUG oslo_concurrency.lockutils [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.087 255071 DEBUG nova.network.neutron [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Refreshing network info cache for port 6d9c5f72-469c-4971-a11c-287eba2d8490 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:48 compute-0 nova_compute[255040]: 2025-11-29 08:08:48.373 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-0 ceph-mon[75237]: pgmap v1494: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Nov 29 08:08:49 compute-0 nova_compute[255040]: 2025-11-29 08:08:49.473 255071 DEBUG nova.network.neutron [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updated VIF entry in instance network info cache for port 6d9c5f72-469c-4971-a11c-287eba2d8490. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:49 compute-0 nova_compute[255040]: 2025-11-29 08:08:49.475 255071 DEBUG nova.network.neutron [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating instance_info_cache with network_info: [{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
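[editor's note] The refreshed cache entry above differs from the earlier one in two ways: the port is now active and the fixed IP 10.100.0.4 carries a newly associated floating IP 192.168.122.214. The network_info payload is plain JSON once split out of the log line, so the addresses can be pulled out programmatically; a sketch over a trimmed copy of that structure:

    # Sketch: extract fixed and floating IPs from a network_info blob like the
    # one logged above (reduced to the fields used here).
    import json

    network_info = json.loads('''[{"network": {"subnets": [{"ips": [
        {"address": "10.100.0.4", "type": "fixed",
         "floating_ips": [{"address": "192.168.122.214", "type": "floating"}]}
    ]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed', ip['address'])
                for fip in ip['floating_ips']:
                    print('floating', fip['address'])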
Nov 29 08:08:49 compute-0 nova_compute[255040]: 2025-11-29 08:08:49.501 255071 DEBUG oslo_concurrency.lockutils [req-46e51de2-208f-454b-a698-c0218008e92f req-536c8323-a203-473a-b12e-682c7e7569dc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:49 compute-0 nova_compute[255040]: 2025-11-29 08:08:49.707 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 29 08:08:50 compute-0 ceph-mon[75237]: pgmap v1495: 305 pgs: 305 active+clean; 134 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 29 08:08:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Nov 29 08:08:52 compute-0 ceph-mon[75237]: pgmap v1496: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Nov 29 08:08:52 compute-0 podman[281539]: 2025-11-29 08:08:52.93768885 +0000 UTC m=+0.100252787 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 08:08:53 compute-0 nova_compute[255040]: 2025-11-29 08:08:53.376 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 113 op/s
Nov 29 08:08:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:08:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.9 total, 600.0 interval
                                           Cumulative writes: 15K writes, 60K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 5053 syncs, 3.13 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8950 writes, 33K keys, 8950 commit groups, 1.0 writes per commit group, ingest: 19.90 MB, 0.03 MB/s
                                           Interval WAL: 8950 writes, 3694 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.190 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.190 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.207 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.271 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.272 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.280 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.281 255071 INFO nova.compute.claims [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.400 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.709 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:54 compute-0 ceph-mon[75237]: pgmap v1497: 305 pgs: 305 active+clean; 134 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 113 op/s
Nov 29 08:08:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638667198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.900 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.907 255071 DEBUG nova.compute.provider_tree [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.932 255071 DEBUG nova.scheduler.client.report [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.963 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:54 compute-0 nova_compute[255040]: 2025-11-29 08:08:54.964 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.012 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.012 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.033 255071 INFO nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.051 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.097 255071 INFO nova.virt.block_device [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Booting with volume 7cda4f1f-ab4f-418d-b279-974d31701130 at /dev/vda
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.223 255071 DEBUG nova.policy [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.228 255071 DEBUG os_brick.utils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.230 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.248 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.248 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[a4820007-9628-4c74-be26-ba8a9afa8657]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.251 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.282 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.282 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[976e48b3-d204-426d-86e2-0cdd36bfc37e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.284 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.295 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.296 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[283bdc81-ffab-4e38-baf9-77e39259cd24]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.298 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec26b50-f55e-465f-b9fe-ef66d07ada36]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.298 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.326 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.328 255071 DEBUG os_brick.initiator.connectors.lightos [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.329 255071 DEBUG os_brick.initiator.connectors.lightos [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.329 255071 DEBUG os_brick.initiator.connectors.lightos [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.329 255071 DEBUG os_brick.utils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.330 255071 DEBUG nova.virt.block_device [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Updating existing volume attachment record: 08bc5c9e-c0f1-4e4b-9ecc-785ed65fe95d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:08:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/666036044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 163 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Nov 29 08:08:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3638667198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/666036044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201631605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 nova_compute[255040]: 2025-11-29 08:08:55.908 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Successfully created port: e34ad281-3e61-4f9c-a12e-ab5c59e7efdd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0005412841049684183 of space, bias 1.0, pg target 0.1623852314905255 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.227 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.229 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.230 255071 INFO nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Creating image(s)
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.230 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.230 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Ensure instance console log exists: /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.231 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.231 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.231 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.602 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Successfully updated port: e34ad281-3e61-4f9c-a12e-ab5c59e7efdd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.615 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.616 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.616 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.704 255071 DEBUG nova.compute.manager [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-changed-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.705 255071 DEBUG nova.compute.manager [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Refreshing instance network info cache due to event network-changed-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:56 compute-0 nova_compute[255040]: 2025-11-29 08:08:56.706 255071 DEBUG oslo_concurrency.lockutils [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 29 08:08:56 compute-0 ceph-mon[75237]: pgmap v1498: 305 pgs: 305 active+clean; 163 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Nov 29 08:08:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3201631605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 08:08:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 29 08:08:57 compute-0 nova_compute[255040]: 2025-11-29 08:08:57.253 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:08:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 163 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.2 MiB/s wr, 121 op/s
Nov 29 08:08:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 29 08:08:57 compute-0 ceph-mon[75237]: osdmap e270: 3 total, 3 up, 3 in
Nov 29 08:08:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 08:08:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.288 255071 DEBUG nova.network.neutron [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Updating instance_info_cache with network_info: [{"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.306 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.306 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Instance network_info: |[{"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.306 255071 DEBUG oslo_concurrency.lockutils [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.307 255071 DEBUG nova.network.neutron [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Refreshing network info cache for port e34ad281-3e61-4f9c-a12e-ab5c59e7efdd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.309 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Start _get_guest_xml network_info=[{"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7cda4f1f-ab4f-418d-b279-974d31701130', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7cda4f1f-ab4f-418d-b279-974d31701130', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '818ae3fd-4905-4e82-8239-823ea098afa2', 'attached_at': '', 'detached_at': '', 'volume_id': '7cda4f1f-ab4f-418d-b279-974d31701130', 'serial': '7cda4f1f-ab4f-418d-b279-974d31701130'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '08bc5c9e-c0f1-4e4b-9ecc-785ed65fe95d', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.315 255071 WARNING nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.320 255071 DEBUG nova.virt.libvirt.host [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.321 255071 DEBUG nova.virt.libvirt.host [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.327 255071 DEBUG nova.virt.libvirt.host [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.328 255071 DEBUG nova.virt.libvirt.host [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.329 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.329 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.330 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.330 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.330 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.331 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.331 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.331 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.332 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.332 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.332 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.332 255071 DEBUG nova.virt.hardware [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.362 255071 DEBUG nova.storage.rbd_utils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 818ae3fd-4905-4e82-8239-823ea098afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.366 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.395 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099433143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099433143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:58 compute-0 ceph-mon[75237]: pgmap v1500: 305 pgs: 305 active+clean; 163 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.2 MiB/s wr, 121 op/s
Nov 29 08:08:58 compute-0 ceph-mon[75237]: osdmap e271: 3 total, 3 up, 3 in
Nov 29 08:08:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3099433143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3099433143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500185443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:58 compute-0 nova_compute[255040]: 2025-11-29 08:08:58.885 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.042 255071 DEBUG os_brick.encryptors [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Using volume encryption metadata '{'encryption_key_id': '3ad5cb92-a047-4ba9-849d-3f2d9984ac98', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7cda4f1f-ab4f-418d-b279-974d31701130', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7cda4f1f-ab4f-418d-b279-974d31701130', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '818ae3fd-4905-4e82-8239-823ea098afa2', 'attached_at': '', 'detached_at': '', 'volume_id': '7cda4f1f-ab4f-418d-b279-974d31701130', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.045 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.060 255071 DEBUG barbicanclient.v1.secrets [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.060 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.095 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.096 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.122 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.123 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.175 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.176 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.210 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.211 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 ovn_controller[153295]: 2025-11-29T08:08:59Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:1d:1e 10.100.0.4
Nov 29 08:08:59 compute-0 ovn_controller[153295]: 2025-11-29T08:08:59Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:1d:1e 10.100.0.4
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.247 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.248 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.275 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.276 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.280 255071 DEBUG nova.network.neutron [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Updated VIF entry in instance network info cache for port e34ad281-3e61-4f9c-a12e-ab5c59e7efdd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.281 255071 DEBUG nova.network.neutron [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Updating instance_info_cache with network_info: [{"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.305 255071 DEBUG oslo_concurrency.lockutils [req-b1b11deb-bc70-4195-98ad-57b7ed7016b1 req-d8cb97c3-a4a0-494e-8cdd-c645b4787485 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-818ae3fd-4905-4e82-8239-823ea098afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.306 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.306 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.351 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.352 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.380 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.381 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.427 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.428 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.463 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.464 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.486 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.487 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.514 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.514 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.541 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.542 255071 INFO barbicanclient.base [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Calculated Secrets uuid ref: secrets/3ad5cb92-a047-4ba9-849d-3f2d9984ac98
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.568 255071 DEBUG barbicanclient.client [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.569 255071 DEBUG nova.virt.libvirt.host [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <volume>7cda4f1f-ab4f-418d-b279-974d31701130</volume>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:08:59 compute-0 nova_compute[255040]: </secret>
Nov 29 08:08:59 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.594 255071 DEBUG nova.virt.libvirt.vif [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1880769940',display_name='tempest-TestVolumeBootPattern-server-1880769940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1880769940',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-818rtxsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:55Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=818ae3fd-4905-4e82-8239-823ea098afa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.595 255071 DEBUG nova.network.os_vif_util [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.596 255071 DEBUG nova.network.os_vif_util [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.597 255071 DEBUG nova.objects.instance [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid 818ae3fd-4905-4e82-8239-823ea098afa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.610 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <uuid>818ae3fd-4905-4e82-8239-823ea098afa2</uuid>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <name>instance-0000000e</name>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-server-1880769940</nova:name>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:08:58</nova:creationTime>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <nova:port uuid="e34ad281-3e61-4f9c-a12e-ab5c59e7efdd">
Nov 29 08:08:59 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <system>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="serial">818ae3fd-4905-4e82-8239-823ea098afa2</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="uuid">818ae3fd-4905-4e82-8239-823ea098afa2</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </system>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <os>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </os>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <features>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </features>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/818ae3fd-4905-4e82-8239-823ea098afa2_disk.config">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </source>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-7cda4f1f-ab4f-418d-b279-974d31701130">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </source>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <serial>7cda4f1f-ab4f-418d-b279-974d31701130</serial>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:08:59 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="b4e0344e-814e-4abb-8535-0e912c401f0d"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:8f:a9:7f"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <target dev="tape34ad281-3e"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/console.log" append="off"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <video>
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </video>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:08:59 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:08:59 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:08:59 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:08:59 compute-0 nova_compute[255040]: </domain>
Nov 29 08:08:59 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.611 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Preparing to wait for external event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.612 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.612 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.613 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.614 255071 DEBUG nova.virt.libvirt.vif [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1880769940',display_name='tempest-TestVolumeBootPattern-server-1880769940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1880769940',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-818rtxsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:55Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=818ae3fd-4905-4e82-8239-823ea098afa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.614 255071 DEBUG nova.network.os_vif_util [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.615 255071 DEBUG nova.network.os_vif_util [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.615 255071 DEBUG os_vif [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.616 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.617 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.617 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.622 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.622 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape34ad281-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.623 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape34ad281-3e, col_values=(('external_ids', {'iface-id': 'e34ad281-3e61-4f9c-a12e-ab5c59e7efdd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:a9:7f', 'vm-uuid': '818ae3fd-4905-4e82-8239-823ea098afa2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.624 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 NetworkManager[49116]: <info>  [1764403739.6260] manager: (tape34ad281-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.626 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.635 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.636 255071 INFO os_vif [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e')
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.701 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.701 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.702 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:8f:a9:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.702 255071 INFO nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Using config drive
Nov 29 08:08:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 217 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.6 MiB/s wr, 122 op/s
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.730 255071 DEBUG nova.storage.rbd_utils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 818ae3fd-4905-4e82-8239-823ea098afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:59 compute-0 nova_compute[255040]: 2025-11-29 08:08:59.736 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/500185443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3084684010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.598 255071 INFO nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Creating config drive at /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.613 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e2xwj78 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.765 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2e2xwj78" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.794 255071 DEBUG nova.storage.rbd_utils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 818ae3fd-4905-4e82-8239-823ea098afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.800 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config 818ae3fd-4905-4e82-8239-823ea098afa2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:00 compute-0 ceph-mon[75237]: pgmap v1502: 305 pgs: 305 active+clean; 217 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.6 MiB/s wr, 122 op/s
Nov 29 08:09:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3084684010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.988 255071 DEBUG oslo_concurrency.processutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config 818ae3fd-4905-4e82-8239-823ea098afa2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:00 compute-0 nova_compute[255040]: 2025-11-29 08:09:00.989 255071 INFO nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Deleting local config drive /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2/disk.config because it was imported into RBD.
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.0534] manager: (tape34ad281-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 08:09:01 compute-0 kernel: tape34ad281-3e: entered promiscuous mode
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.059 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_controller[153295]: 2025-11-29T08:09:01Z|00135|binding|INFO|Claiming lport e34ad281-3e61-4f9c-a12e-ab5c59e7efdd for this chassis.
Nov 29 08:09:01 compute-0 ovn_controller[153295]: 2025-11-29T08:09:01Z|00136|binding|INFO|e34ad281-3e61-4f9c-a12e-ab5c59e7efdd: Claiming fa:16:3e:8f:a9:7f 10.100.0.13
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.066 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a9:7f 10.100.0.13'], port_security=['fa:16:3e:8f:a9:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '818ae3fd-4905-4e82-8239-823ea098afa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3cc607b-4336-4239-91d4-371fe33f0a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.068 163500 INFO neutron.agent.ovn.metadata.agent [-] Port e34ad281-3e61-4f9c-a12e-ab5c59e7efdd in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.069 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:01 compute-0 ovn_controller[153295]: 2025-11-29T08:09:01Z|00137|binding|INFO|Setting lport e34ad281-3e61-4f9c-a12e-ab5c59e7efdd ovn-installed in OVS
Nov 29 08:09:01 compute-0 ovn_controller[153295]: 2025-11-29T08:09:01Z|00138|binding|INFO|Setting lport e34ad281-3e61-4f9c-a12e-ab5c59e7efdd up in Southbound
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.077 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.080 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.086 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[24db46d9-f8d5-4f80-98fa-17120557f495]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.087 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e23492e-b1 in ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.089 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e23492e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.089 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6289d8fe-0135-4e33-9e62-d8fc8444dfd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.090 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1464c587-d780-4856-a68c-6d52b7aa7dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 systemd-udevd[281709]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:01 compute-0 systemd-machined[216271]: New machine qemu-14-instance-0000000e.
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.105 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa07399-633e-45cb-bd1d-02f67164392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.1111] device (tape34ad281-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.1125] device (tape34ad281-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.133 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[10d62ba8-9f53-40a0-84ee-83c0143a500b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.172 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[15541475-908d-4908-a0a2-e566ac6922b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.179 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b291d5f6-e2a4-4513-b54f-4081006666c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.1804] manager: (tap6e23492e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.217 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[698f3d7e-4609-4cbe-ac36-7e08c5b61c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.222 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c62e13-d196-4877-a58f-78604583f7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.2471] device (tap6e23492e-b0): carrier: link connected
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.254 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8fbf17-ae7e-4f3f-9085-51739d08520c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.276 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5d526888-4a10-451e-9c95-6c91d85a2562]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597858, 'reachable_time': 24737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281741, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.294 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[354cd320-8a0b-47ae-b33d-bfbe4045ca49]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597858, 'tstamp': 597858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281742, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.314 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f52180aa-3da7-47ed-a014-6a0b82983ebc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597858, 'reachable_time': 24737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281743, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.353 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[51942cc6-cf9e-4838-9bb0-b667e37f2b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.420 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6cb2d9-8b9d-4fe4-a0f4-c948e2e8885f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.423 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.423 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.424 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.426 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 NetworkManager[49116]: <info>  [1764403741.4274] manager: (tap6e23492e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 08:09:01 compute-0 kernel: tap6e23492e-b0: entered promiscuous mode
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.430 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.431 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.433 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_controller[153295]: 2025-11-29T08:09:01Z|00139|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.449 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.451 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.452 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.453 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9b24461a-d060-4568-a873-1053e9ff40c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.454 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:01.455 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'env', 'PROCESS_TAG=haproxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.631 255071 DEBUG nova.compute.manager [req-d62f6704-c6e8-44ed-bdc7-97c3fc534fdf req-06889554-c1aa-4307-9fe2-fd100835c4a1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.631 255071 DEBUG oslo_concurrency.lockutils [req-d62f6704-c6e8-44ed-bdc7-97c3fc534fdf req-06889554-c1aa-4307-9fe2-fd100835c4a1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.632 255071 DEBUG oslo_concurrency.lockutils [req-d62f6704-c6e8-44ed-bdc7-97c3fc534fdf req-06889554-c1aa-4307-9fe2-fd100835c4a1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.632 255071 DEBUG oslo_concurrency.lockutils [req-d62f6704-c6e8-44ed-bdc7-97c3fc534fdf req-06889554-c1aa-4307-9fe2-fd100835c4a1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:01 compute-0 nova_compute[255040]: 2025-11-29 08:09:01.632 255071 DEBUG nova.compute.manager [req-d62f6704-c6e8-44ed-bdc7-97c3fc534fdf req-06889554-c1aa-4307-9fe2-fd100835c4a1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Processing event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 247 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 161 op/s
Nov 29 08:09:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 29 08:09:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 08:09:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 29 08:09:01 compute-0 podman[281811]: 2025-11-29 08:09:01.876233584 +0000 UTC m=+0.065887013 container create 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:09:01 compute-0 systemd[1]: Started libpod-conmon-9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd.scope.
Nov 29 08:09:01 compute-0 podman[281811]: 2025-11-29 08:09:01.842496186 +0000 UTC m=+0.032149635 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:01 compute-0 podman[281815]: 2025-11-29 08:09:01.940581244 +0000 UTC m=+0.092188019 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 08:09:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7152d58232c26af1ec48e3a6c067abe92e729e01fa790715d836ff703b067343/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:01 compute-0 podman[281811]: 2025-11-29 08:09:01.984549487 +0000 UTC m=+0.174202946 container init 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:01 compute-0 podman[281811]: 2025-11-29 08:09:01.990358863 +0000 UTC m=+0.180012292 container start 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:02 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [NOTICE]   (281850) : New worker (281852) forked
Nov 29 08:09:02 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [NOTICE]   (281850) : Loading success.
Nov 29 08:09:02 compute-0 ceph-mon[75237]: pgmap v1503: 305 pgs: 305 active+clean; 247 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 161 op/s
Nov 29 08:09:02 compute-0 ceph-mon[75237]: osdmap e272: 3 total, 3 up, 3 in
Nov 29 08:09:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 273 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 9.4 MiB/s wr, 210 op/s
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.726 255071 DEBUG nova.compute.manager [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.726 255071 DEBUG oslo_concurrency.lockutils [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.727 255071 DEBUG oslo_concurrency.lockutils [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.727 255071 DEBUG oslo_concurrency.lockutils [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.727 255071 DEBUG nova.compute.manager [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] No waiting events found dispatching network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:03 compute-0 nova_compute[255040]: 2025-11-29 08:09:03.727 255071 WARNING nova.compute.manager [req-7645273b-f872-4efa-9732-8745eedd1939 req-15698336-c562-4117-afe0-99a0bec1468a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received unexpected event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd for instance with vm_state building and task_state spawning.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.089 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403744.0881107, 818ae3fd-4905-4e82-8239-823ea098afa2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.092 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] VM Started (Lifecycle Event)
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.096 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.101 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.105 255071 INFO nova.virt.libvirt.driver [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Instance spawned successfully.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.106 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.122 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.131 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.135 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.136 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.136 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.137 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.137 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.138 255071 DEBUG nova.virt.libvirt.driver [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.164 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.165 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403744.0885994, 818ae3fd-4905-4e82-8239-823ea098afa2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.165 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] VM Paused (Lifecycle Event)
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.202 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.207 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403744.1006675, 818ae3fd-4905-4e82-8239-823ea098afa2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.207 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] VM Resumed (Lifecycle Event)
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.212 255071 INFO nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Took 7.98 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.212 255071 DEBUG nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.342 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.347 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.367 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.377 255071 INFO nova.compute.manager [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Took 10.14 seconds to build instance.
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.404 255071 DEBUG oslo_concurrency.lockutils [None req-93113914-b09b-41e0-ae64-1fc0b85dedfe 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.625 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:04 compute-0 nova_compute[255040]: 2025-11-29 08:09:04.717 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:04 compute-0 ceph-mon[75237]: pgmap v1505: 305 pgs: 305 active+clean; 273 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 9.4 MiB/s wr, 210 op/s
Nov 29 08:09:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 9.7 MiB/s wr, 227 op/s
Nov 29 08:09:05 compute-0 nova_compute[255040]: 2025-11-29 08:09:05.967 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:05 compute-0 nova_compute[255040]: 2025-11-29 08:09:05.968 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:05 compute-0 nova_compute[255040]: 2025-11-29 08:09:05.969 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:05 compute-0 nova_compute[255040]: 2025-11-29 08:09:05.969 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:05 compute-0 nova_compute[255040]: 2025-11-29 08:09:05.970 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.112 255071 INFO nova.compute.manager [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Terminating instance
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.115 255071 DEBUG nova.compute.manager [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:06 compute-0 kernel: tape34ad281-3e (unregistering): left promiscuous mode
Nov 29 08:09:06 compute-0 NetworkManager[49116]: <info>  [1764403746.1707] device (tape34ad281-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:06 compute-0 ovn_controller[153295]: 2025-11-29T08:09:06Z|00140|binding|INFO|Releasing lport e34ad281-3e61-4f9c-a12e-ab5c59e7efdd from this chassis (sb_readonly=0)
Nov 29 08:09:06 compute-0 ovn_controller[153295]: 2025-11-29T08:09:06Z|00141|binding|INFO|Setting lport e34ad281-3e61-4f9c-a12e-ab5c59e7efdd down in Southbound
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.184 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 ovn_controller[153295]: 2025-11-29T08:09:06Z|00142|binding|INFO|Removing iface tape34ad281-3e ovn-installed in OVS
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.188 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.192 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:a9:7f 10.100.0.13'], port_security=['fa:16:3e:8f:a9:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '818ae3fd-4905-4e82-8239-823ea098afa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3cc607b-4336-4239-91d4-371fe33f0a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.193 163500 INFO neutron.agent.ovn.metadata.agent [-] Port e34ad281-3e61-4f9c-a12e-ab5c59e7efdd in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.195 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.197 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[07daac75-a0d6-41e5-aae6-3613108d32a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.198 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace which is not needed anymore
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.206 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 29 08:09:06 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 3.569s CPU time.
Nov 29 08:09:06 compute-0 systemd-machined[216271]: Machine qemu-14-instance-0000000e terminated.
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.356 255071 INFO nova.virt.libvirt.driver [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Instance destroyed successfully.
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.357 255071 DEBUG nova.objects.instance [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid 818ae3fd-4905-4e82-8239-823ea098afa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:06 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [NOTICE]   (281850) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:06 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [NOTICE]   (281850) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:06 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [WARNING]  (281850) : Exiting Master process...
Nov 29 08:09:06 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [ALERT]    (281850) : Current worker (281852) exited with code 143 (Terminated)
Nov 29 08:09:06 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[281844]: [WARNING]  (281850) : All workers exited. Exiting... (0)
Nov 29 08:09:06 compute-0 systemd[1]: libpod-9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd.scope: Deactivated successfully.
Nov 29 08:09:06 compute-0 podman[281891]: 2025-11-29 08:09:06.373916215 +0000 UTC m=+0.057716803 container died 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.373 255071 DEBUG nova.virt.libvirt.vif [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1880769940',display_name='tempest-TestVolumeBootPattern-server-1880769940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1880769940',id=14,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-818rtxsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:04Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=818ae3fd-4905-4e82-8239-823ea098afa2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.374 255071 DEBUG nova.network.os_vif_util [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "address": "fa:16:3e:8f:a9:7f", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape34ad281-3e", "ovs_interfaceid": "e34ad281-3e61-4f9c-a12e-ab5c59e7efdd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.376 255071 DEBUG nova.network.os_vif_util [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.377 255071 DEBUG os_vif [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.380 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.381 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape34ad281-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.383 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.386 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.390 255071 INFO os_vif [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:a9:7f,bridge_name='br-int',has_traffic_filtering=True,id=e34ad281-3e61-4f9c-a12e-ab5c59e7efdd,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape34ad281-3e')
Nov 29 08:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7152d58232c26af1ec48e3a6c067abe92e729e01fa790715d836ff703b067343-merged.mount: Deactivated successfully.
Nov 29 08:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:06 compute-0 podman[281891]: 2025-11-29 08:09:06.426395606 +0000 UTC m=+0.110196184 container cleanup 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:09:06 compute-0 systemd[1]: libpod-conmon-9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd.scope: Deactivated successfully.
Nov 29 08:09:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:06 compute-0 podman[281948]: 2025-11-29 08:09:06.51169788 +0000 UTC m=+0.059381967 container remove 9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.518 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[43698025-a72d-4028-bdff-2de81ef9f98d]: (4, ('Sat Nov 29 08:09:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd)\n9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd\nSat Nov 29 08:09:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd)\n9041df447ef51a484e48cbccdc51c55a7b67c61d5302565b03359b771204ecfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.520 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[eefdc36b-ce74-411f-b6a8-0bcfbb16d67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.522 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.572 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 kernel: tap6e23492e-b0: left promiscuous mode
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.593 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.599 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecc457f-33dc-4938-8582-2c91c4bad973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.615 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0d9296-4efe-481b-aba1-0b0fde4b3fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.617 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b2408d87-6409-4ed6-af67-d0b9d58a4b0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.636 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4f2298-dc7a-47a8-bb47-e777c218b707]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597850, 'reachable_time': 20414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281966, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e23492e\x2dbeff\x2d43f6\x2db4d1\x2df88ebeea0b6f.mount: Deactivated successfully.
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.642 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:06 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:06.643 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[29d4a2a0-01a7-4072-b560-e335b9f2a465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.676 255071 INFO nova.virt.libvirt.driver [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Deleting instance files /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2_del
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.677 255071 INFO nova.virt.libvirt.driver [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Deletion of /var/lib/nova/instances/818ae3fd-4905-4e82-8239-823ea098afa2_del complete
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.726 255071 DEBUG nova.compute.manager [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-unplugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.727 255071 DEBUG oslo_concurrency.lockutils [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.727 255071 DEBUG oslo_concurrency.lockutils [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.728 255071 DEBUG oslo_concurrency.lockutils [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.728 255071 DEBUG nova.compute.manager [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] No waiting events found dispatching network-vif-unplugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.728 255071 DEBUG nova.compute.manager [req-855c3474-4a9d-4058-8dfd-c08f47298441 req-7aaae873-eb13-4293-824a-49b97c5b63f5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-unplugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.736 255071 INFO nova.compute.manager [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Took 0.62 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.737 255071 DEBUG oslo.service.loopingcall [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.738 255071 DEBUG nova.compute.manager [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:06 compute-0 nova_compute[255040]: 2025-11-29 08:09:06.738 255071 DEBUG nova.network.neutron [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:09:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 63K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 4927 syncs, 3.26 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8252 writes, 32K keys, 8252 commit groups, 1.0 writes per commit group, ingest: 20.69 MB, 0.03 MB/s
                                           Interval WAL: 8253 writes, 3353 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:09:06 compute-0 ceph-mon[75237]: pgmap v1506: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 9.7 MiB/s wr, 227 op/s
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.436 255071 DEBUG nova.network.neutron [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.458 255071 INFO nova.compute.manager [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Took 0.72 seconds to deallocate network for instance.
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.688 255071 INFO nova.compute.manager [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Took 0.23 seconds to detach 1 volumes for instance.
Nov 29 08:09:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.9 MiB/s wr, 184 op/s
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.728 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.729 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:07 compute-0 nova_compute[255040]: 2025-11-29 08:09:07.815 255071 DEBUG oslo_concurrency.processutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372467272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.310 255071 DEBUG oslo_concurrency.processutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.319 255071 DEBUG nova.compute.provider_tree [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.340 255071 DEBUG nova.scheduler.client.report [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.372 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.406 255071 INFO nova.scheduler.client.report [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance 818ae3fd-4905-4e82-8239-823ea098afa2
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.471 255071 DEBUG oslo_concurrency.lockutils [None req-78f5e967-c991-45af-8be0-18d2222ee499 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.784 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.785 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.806 255071 DEBUG nova.compute.manager [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.806 255071 DEBUG oslo_concurrency.lockutils [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.807 255071 DEBUG oslo_concurrency.lockutils [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.807 255071 DEBUG oslo_concurrency.lockutils [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "818ae3fd-4905-4e82-8239-823ea098afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.807 255071 DEBUG nova.compute.manager [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] No waiting events found dispatching network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.807 255071 WARNING nova.compute.manager [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received unexpected event network-vif-plugged-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd for instance with vm_state deleted and task_state None.
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.807 255071 DEBUG nova.compute.manager [req-8809d199-cbc5-4295-b445-98b0f01248e9 req-26fa412e-27e3-42d0-ada4-1a52fcf62b26 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Received event network-vif-deleted-e34ad281-3e61-4f9c-a12e-ab5c59e7efdd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.808 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.896 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.897 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:08 compute-0 ceph-mon[75237]: pgmap v1507: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.9 MiB/s wr, 184 op/s
Nov 29 08:09:08 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2372467272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.912 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:08 compute-0 nova_compute[255040]: 2025-11-29 08:09:08.912 255071 INFO nova.compute.claims [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.040 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534462658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.518 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.544 255071 DEBUG nova.compute.provider_tree [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.566 255071 DEBUG nova.scheduler.client.report [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.587 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.588 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.629 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.630 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.647 255071 INFO nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.663 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.701 255071 INFO nova.virt.block_device [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Booting with volume 98723f7f-2d58-40f2-8c56-14460211a9ed at /dev/vda
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.716 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.7 MiB/s wr, 125 op/s
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.789 255071 DEBUG nova.policy [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd35494d39d8d404891546638d8f87af5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b82a0d97ae1643c5827b47c48ab0fc71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273652176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273652176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.848 255071 DEBUG os_brick.utils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.850 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.864 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.865 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[9b740a73-7ad2-49f7-83ea-414b94920553]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.867 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.877 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.877 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8f0dbb-6dac-4f2d-8274-f18f8d17b1f6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.880 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.891 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.891 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc99a8f-8db4-4054-95f7-27863167a608]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.894 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9e2ace-9a7e-4011-84b8-d161fb69f0cc]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.895 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/534462658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3273652176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3273652176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 podman[282012]: 2025-11-29 08:09:09.91655705 +0000 UTC m=+0.073746284 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.933 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.936 255071 DEBUG os_brick.initiator.connectors.lightos [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.936 255071 DEBUG os_brick.initiator.connectors.lightos [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.936 255071 DEBUG os_brick.initiator.connectors.lightos [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.936 255071 DEBUG os_brick.utils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] <== get_connector_properties: return (88ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:09:09 compute-0 nova_compute[255040]: 2025-11-29 08:09:09.937 255071 DEBUG nova.virt.block_device [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating existing volume attachment record: ca8064ea-4b6f-4cdd-8d85-722439585dc4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.452 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Successfully created port: beabc602-cbf2-4e13-adbf-6e5254ac8e0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1955074173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:10 compute-0 ceph-mon[75237]: pgmap v1508: 305 pgs: 305 active+clean; 306 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.7 MiB/s wr, 125 op/s
Nov 29 08:09:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1955074173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:10 compute-0 sudo[282039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:10 compute-0 sudo[282039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:10 compute-0 sudo[282039]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.981 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.984 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.985 255071 INFO nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Creating image(s)
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.986 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.987 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Ensure instance console log exists: /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.987 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.988 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:10 compute-0 nova_compute[255040]: 2025-11-29 08:09:10.989 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:11 compute-0 sudo[282064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:09:11 compute-0 sudo[282064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 sudo[282064]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 sudo[282089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:11 compute-0 sudo[282089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 sudo[282089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 sudo[282114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:09:11 compute-0 sudo[282114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.219 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Successfully updated port: beabc602-cbf2-4e13-adbf-6e5254ac8e0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.243 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.244 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.244 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.307 255071 DEBUG nova.compute.manager [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.307 255071 DEBUG nova.compute.manager [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing instance network info cache due to event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.307 255071 DEBUG oslo_concurrency.lockutils [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.384 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:11 compute-0 nova_compute[255040]: 2025-11-29 08:09:11.392 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:11 compute-0 sudo[282114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:09:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev faf73bd1-b6a2-4652-8f99-4e2f21358647 does not exist
Nov 29 08:09:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 172243f8-d03a-4d07-95e6-22ec250ffebf does not exist
Nov 29 08:09:11 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b700c21c-b16c-44eb-9fba-f94635d72bac does not exist
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:09:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:09:11 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:11 compute-0 sudo[282171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:11 compute-0 sudo[282171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 sudo[282171]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 sudo[282196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:09:11 compute-0 sudo[282196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 sudo[282196]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 sudo[282221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:11 compute-0 sudo[282221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:11 compute-0 sudo[282221]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:11 compute-0 sudo[282246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:09:12 compute-0 sudo[282246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.034 255071 DEBUG nova.network.neutron [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.058 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.058 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Instance network_info: |[{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.059 255071 DEBUG oslo_concurrency.lockutils [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.059 255071 DEBUG nova.network.neutron [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.062 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Start _get_guest_xml network_info=[{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-98723f7f-2d58-40f2-8c56-14460211a9ed', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '98723f7f-2d58-40f2-8c56-14460211a9ed', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'fb7c2a0f-da59-4d91-abfb-6de392bff759', 'attached_at': '', 'detached_at': '', 'volume_id': '98723f7f-2d58-40f2-8c56-14460211a9ed', 'serial': '98723f7f-2d58-40f2-8c56-14460211a9ed'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': 'ca8064ea-4b6f-4cdd-8d85-722439585dc4', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.067 255071 WARNING nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.077 255071 DEBUG nova.virt.libvirt.host [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.078 255071 DEBUG nova.virt.libvirt.host [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.081 255071 DEBUG nova.virt.libvirt.host [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.081 255071 DEBUG nova.virt.libvirt.host [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.082 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.082 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.083 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.083 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.083 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.083 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.084 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.084 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.084 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.084 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.085 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.085 255071 DEBUG nova.virt.hardware [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.116 255071 DEBUG nova.storage.rbd_utils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] rbd image fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.124 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:09:12 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.405191531 +0000 UTC m=+0.047070517 container create 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:09:12 compute-0 systemd[1]: Started libpod-conmon-316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a.scope.
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.383893598 +0000 UTC m=+0.025772614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.507885903 +0000 UTC m=+0.149764919 container init 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.518120438 +0000 UTC m=+0.159999424 container start 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.52153808 +0000 UTC m=+0.163417096 container attach 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:09:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:09:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4015 syncs, 3.36 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6347 writes, 26K keys, 6347 commit groups, 1.0 writes per commit group, ingest: 19.07 MB, 0.03 MB/s
                                           Interval WAL: 6347 writes, 2569 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:09:12 compute-0 hardcore_lovelace[282365]: 167 167
Nov 29 08:09:12 compute-0 systemd[1]: libpod-316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a.scope: Deactivated successfully.
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.528908238 +0000 UTC m=+0.170787214 container died 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-603f78516c9d19f176991957a530f57fd90cdca9bfa93b9e3237b3ae37787e41-merged.mount: Deactivated successfully.
Nov 29 08:09:12 compute-0 podman[282348]: 2025-11-29 08:09:12.574299789 +0000 UTC m=+0.216178775 container remove 316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:12 compute-0 systemd[1]: libpod-conmon-316692914d22bee926e77a9e89a6acc7446b32b74872efb447c51ab60125cc2a.scope: Deactivated successfully.
Nov 29 08:09:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312849481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.635 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.663 255071 DEBUG nova.virt.libvirt.vif [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-110471317',display_name='tempest-TestVolumeBackupRestore-server-110471317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-110471317',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH/xyTZP4ASJ0cxi1kioa3QVMCW3LA7If4GnthVEkBTP7C1Y9t2v6xrSBYUsfIwbI+dkIDldNWyWJhdAyt0g4ZJdGVF4vKANTylMU2zJMN3r5qJ2x1ZOtIcri0Br71jRUg==',key_name='tempest-TestVolumeBackupRestore-1588688224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b82a0d97ae1643c5827b47c48ab0fc71',ramdisk_id='',reservation_id='r-kmdzpea0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1109760377',owner_user_name='tempest-TestVolumeBackupRestore-1109760377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:09Z,user_data=None,user_id='d35494d39d8d404891546638d8f87af5',uuid=fb7c2a0f-da59-4d91-abfb-6de392bff759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.663 255071 DEBUG nova.network.os_vif_util [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converting VIF {"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.665 255071 DEBUG nova.network.os_vif_util [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.668 255071 DEBUG nova.objects.instance [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb7c2a0f-da59-4d91-abfb-6de392bff759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.690 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <uuid>fb7c2a0f-da59-4d91-abfb-6de392bff759</uuid>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <name>instance-0000000f</name>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBackupRestore-server-110471317</nova:name>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:09:12</nova:creationTime>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:user uuid="d35494d39d8d404891546638d8f87af5">tempest-TestVolumeBackupRestore-1109760377-project-member</nova:user>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:project uuid="b82a0d97ae1643c5827b47c48ab0fc71">tempest-TestVolumeBackupRestore-1109760377</nova:project>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <nova:port uuid="beabc602-cbf2-4e13-adbf-6e5254ac8e0a">
Nov 29 08:09:12 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <system>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="serial">fb7c2a0f-da59-4d91-abfb-6de392bff759</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="uuid">fb7c2a0f-da59-4d91-abfb-6de392bff759</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </system>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <os>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </os>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <features>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </features>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config">
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </source>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-98723f7f-2d58-40f2-8c56-14460211a9ed">
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </source>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:09:12 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <serial>98723f7f-2d58-40f2-8c56-14460211a9ed</serial>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:60:82:3d"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <target dev="tapbeabc602-cb"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/console.log" append="off"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <video>
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </video>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:09:12 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:09:12 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:09:12 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:09:12 compute-0 nova_compute[255040]: </domain>
Nov 29 08:09:12 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.692 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Preparing to wait for external event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.693 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.693 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.694 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.695 255071 DEBUG nova.virt.libvirt.vif [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-110471317',display_name='tempest-TestVolumeBackupRestore-server-110471317',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-110471317',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH/xyTZP4ASJ0cxi1kioa3QVMCW3LA7If4GnthVEkBTP7C1Y9t2v6xrSBYUsfIwbI+dkIDldNWyWJhdAyt0g4ZJdGVF4vKANTylMU2zJMN3r5qJ2x1ZOtIcri0Br71jRUg==',key_name='tempest-TestVolumeBackupRestore-1588688224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b82a0d97ae1643c5827b47c48ab0fc71',ramdisk_id='',reservation_id='r-kmdzpea0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1109760377',owner_user_name='tempest-TestVolumeBackupRestore-1109760377-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:09Z,user_data=None,user_id='d35494d39d8d404891546638d8f87af5',uuid=fb7c2a0f-da59-4d91-abfb-6de392bff759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.695 255071 DEBUG nova.network.os_vif_util [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converting VIF {"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.696 255071 DEBUG nova.network.os_vif_util [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.696 255071 DEBUG os_vif [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.698 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.698 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.699 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.705 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.705 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbeabc602-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.706 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbeabc602-cb, col_values=(('external_ids', {'iface-id': 'beabc602-cbf2-4e13-adbf-6e5254ac8e0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:82:3d', 'vm-uuid': 'fb7c2a0f-da59-4d91-abfb-6de392bff759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.708 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-0 NetworkManager[49116]: <info>  [1764403752.7096] manager: (tapbeabc602-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.710 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.717 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.719 255071 INFO os_vif [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb')
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.780 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.780 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.780 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] No VIF found with MAC fa:16:3e:60:82:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.781 255071 INFO nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Using config drive
Nov 29 08:09:12 compute-0 podman[282391]: 2025-11-29 08:09:12.790242846 +0000 UTC m=+0.056523351 container create 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:09:12 compute-0 nova_compute[255040]: 2025-11-29 08:09:12.810 255071 DEBUG nova.storage.rbd_utils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] rbd image fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:12 compute-0 systemd[1]: Started libpod-conmon-3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20.scope.
Nov 29 08:09:12 compute-0 podman[282391]: 2025-11-29 08:09:12.765668896 +0000 UTC m=+0.031949421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:12 compute-0 podman[282391]: 2025-11-29 08:09:12.906468092 +0000 UTC m=+0.172748627 container init 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:09:12 compute-0 podman[282391]: 2025-11-29 08:09:12.918429804 +0000 UTC m=+0.184710309 container start 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:12 compute-0 podman[282391]: 2025-11-29 08:09:12.922698169 +0000 UTC m=+0.188978684 container attach 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:09:13 compute-0 ceph-mon[75237]: pgmap v1509: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Nov 29 08:09:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2312849481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.404 255071 INFO nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Creating config drive at /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.410 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppq7jynqw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.542 255071 DEBUG nova.network.neutron [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updated VIF entry in instance network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.543 255071 DEBUG nova.network.neutron [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.546 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppq7jynqw" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.571 255071 DEBUG nova.storage.rbd_utils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] rbd image fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.576 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.734 255071 DEBUG oslo_concurrency.processutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config fb7c2a0f-da59-4d91-abfb-6de392bff759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.735 255071 INFO nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Deleting local config drive /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759/disk.config because it was imported into RBD.
Nov 29 08:09:13 compute-0 kernel: tapbeabc602-cb: entered promiscuous mode
Nov 29 08:09:13 compute-0 NetworkManager[49116]: <info>  [1764403753.7935] manager: (tapbeabc602-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 08:09:13 compute-0 ovn_controller[153295]: 2025-11-29T08:09:13Z|00143|binding|INFO|Claiming lport beabc602-cbf2-4e13-adbf-6e5254ac8e0a for this chassis.
Nov 29 08:09:13 compute-0 ovn_controller[153295]: 2025-11-29T08:09:13Z|00144|binding|INFO|beabc602-cbf2-4e13-adbf-6e5254ac8e0a: Claiming fa:16:3e:60:82:3d 10.100.0.5
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.793 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:13 compute-0 ovn_controller[153295]: 2025-11-29T08:09:13Z|00145|binding|INFO|Setting lport beabc602-cbf2-4e13-adbf-6e5254ac8e0a ovn-installed in OVS
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.813 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.816 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:13 compute-0 systemd-udevd[282493]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:13 compute-0 NetworkManager[49116]: <info>  [1764403753.8434] device (tapbeabc602-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:13 compute-0 systemd-machined[216271]: New machine qemu-15-instance-0000000f.
Nov 29 08:09:13 compute-0 NetworkManager[49116]: <info>  [1764403753.8448] device (tapbeabc602-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:13 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Nov 29 08:09:13 compute-0 ovn_controller[153295]: 2025-11-29T08:09:13Z|00146|binding|INFO|Setting lport beabc602-cbf2-4e13-adbf-6e5254ac8e0a up in Southbound
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.892 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:82:3d 10.100.0.5'], port_security=['fa:16:3e:60:82:3d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fb7c2a0f-da59-4d91-abfb-6de392bff759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a7ec55d-851c-4f69-99cf-2136c772174e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b82a0d97ae1643c5827b47c48ab0fc71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3d9dde9-3ce8-45c8-ac1f-27139dcb9640', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e47df327-b9d6-42fa-8a7d-8e08f0aa09b4, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=beabc602-cbf2-4e13-adbf-6e5254ac8e0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.894 163500 INFO neutron.agent.ovn.metadata.agent [-] Port beabc602-cbf2-4e13-adbf-6e5254ac8e0a in datapath 0a7ec55d-851c-4f69-99cf-2136c772174e bound to our chassis
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.895 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a7ec55d-851c-4f69-99cf-2136c772174e
Nov 29 08:09:13 compute-0 nova_compute[255040]: 2025-11-29 08:09:13.907 255071 DEBUG oslo_concurrency.lockutils [req-5a311832-4325-4f65-8e17-9abf30731a79 req-d1596d28-5292-47d1-aa35-8f860aaf2ef7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.912 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a30bb2ab-72d1-4652-89ca-1a98c8bafcbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.913 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a7ec55d-81 in ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.916 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a7ec55d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.916 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[35be3b9a-ea30-4616-95db-04a4f7b48a32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.918 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[15c0cb13-ca23-4522-8486-5e528d95bc27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.937 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[24302906-5c17-412a-8441-6a7b397c82f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:13.966 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4875426e-3dd5-48ad-9a63-fda96bfabf4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.006 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[851a8256-9e1f-4498-bbab-979e61e576ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.014 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6f0b0f-3c64-4750-9ebe-f3e306ffd6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 NetworkManager[49116]: <info>  [1764403754.0190] manager: (tap0a7ec55d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.069 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[b903100f-48c5-478e-939f-232a79cb33bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.074 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3dc199-aa1a-4d14-9c6a-934a05eaac23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 upbeat_darwin[282426]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:09:14 compute-0 upbeat_darwin[282426]: --> relative data size: 1.0
Nov 29 08:09:14 compute-0 upbeat_darwin[282426]: --> All data devices are unavailable
Nov 29 08:09:14 compute-0 NetworkManager[49116]: <info>  [1764403754.1072] device (tap0a7ec55d-80): carrier: link connected
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.113 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[e74fffb2-a2c7-4951-8cc4-23c1bfcce552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 systemd[1]: libpod-3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20.scope: Deactivated successfully.
Nov 29 08:09:14 compute-0 systemd[1]: libpod-3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20.scope: Consumed 1.127s CPU time.
Nov 29 08:09:14 compute-0 podman[282391]: 2025-11-29 08:09:14.140556712 +0000 UTC m=+1.406837217 container died 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.137 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8beb7304-e453-418d-b0ed-35c01b7c457a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a7ec55d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:02:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599144, 'reachable_time': 41047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282541, 'error': None, 'target': 'ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.162 255071 DEBUG nova.compute.manager [req-7c785183-3d0b-4c11-a927-54e67de4318b req-56002699-9581-4375-a44e-d3934c2f2f4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.162 255071 DEBUG oslo_concurrency.lockutils [req-7c785183-3d0b-4c11-a927-54e67de4318b req-56002699-9581-4375-a44e-d3934c2f2f4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.163 255071 DEBUG oslo_concurrency.lockutils [req-7c785183-3d0b-4c11-a927-54e67de4318b req-56002699-9581-4375-a44e-d3934c2f2f4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.163 255071 DEBUG oslo_concurrency.lockutils [req-7c785183-3d0b-4c11-a927-54e67de4318b req-56002699-9581-4375-a44e-d3934c2f2f4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.163 255071 DEBUG nova.compute.manager [req-7c785183-3d0b-4c11-a927-54e67de4318b req-56002699-9581-4375-a44e-d3934c2f2f4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Processing event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.162 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[72db5811-6743-4b05-beb0-eb815aae76fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599144, 'tstamp': 599144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282542, 'error': None, 'target': 'ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d502b258b6841c2f617c9e05eb22d76476d74752fbb8192c7ccc57101e05c4-merged.mount: Deactivated successfully.
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.195 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[030ae74e-7d81-4502-8ede-c94cb74e5251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a7ec55d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:02:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599144, 'reachable_time': 41047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282550, 'error': None, 'target': 'ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 podman[282391]: 2025-11-29 08:09:14.212391874 +0000 UTC m=+1.478672379 container remove 3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:14 compute-0 systemd[1]: libpod-conmon-3bb4264d273e52e643ab5c12960855f42db3b4d20cad0ed0d3a926bedb168e20.scope: Deactivated successfully.
Nov 29 08:09:14 compute-0 sudo[282246]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.268 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8c21f85a-bd92-4e70-b75d-69f2918e3737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.283 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.286 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:14 compute-0 ceph-mon[75237]: pgmap v1510: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 08:09:14 compute-0 sudo[282559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.385 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[229d5eeb-528c-41e9-a3bd-ae7ad3963e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.388 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a7ec55d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.388 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.389 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a7ec55d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:14 compute-0 NetworkManager[49116]: <info>  [1764403754.3921] manager: (tap0a7ec55d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 08:09:14 compute-0 sudo[282559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:14 compute-0 kernel: tap0a7ec55d-80: entered promiscuous mode
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.391 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 sudo[282559]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.397 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.399 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a7ec55d-80, col_values=(('external_ids', {'iface-id': 'bbc57954-0077-4866-8998-5669f1d21b04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.401 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 ovn_controller[153295]: 2025-11-29T08:09:14Z|00147|binding|INFO|Releasing lport bbc57954-0077-4866-8998-5669f1d21b04 from this chassis (sb_readonly=0)
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.418 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.419 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.422 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a7ec55d-851c-4f69-99cf-2136c772174e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a7ec55d-851c-4f69-99cf-2136c772174e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.423 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9312bf-f407-4ea8-9d30-69900e8f4f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.424 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-0a7ec55d-851c-4f69-99cf-2136c772174e
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/0a7ec55d-851c-4f69-99cf-2136c772174e.pid.haproxy
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 0a7ec55d-851c-4f69-99cf-2136c772174e
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:14.426 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e', 'env', 'PROCESS_TAG=haproxy-0a7ec55d-851c-4f69-99cf-2136c772174e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a7ec55d-851c-4f69-99cf-2136c772174e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:14 compute-0 sudo[282588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:09:14 compute-0 sudo[282588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:14 compute-0 sudo[282588]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:14 compute-0 sudo[282616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:14 compute-0 sudo[282616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:14 compute-0 sudo[282616]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:14 compute-0 sudo[282641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:09:14 compute-0 sudo[282641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.719 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.830 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403754.8300853, fb7c2a0f-da59-4d91-abfb-6de392bff759 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.832 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] VM Started (Lifecycle Event)
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.835 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.838 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.843 255071 INFO nova.virt.libvirt.driver [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Instance spawned successfully.
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.843 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.883 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.890 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.894 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.894 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.895 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.895 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.895 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.896 255071 DEBUG nova.virt.libvirt.driver [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:14 compute-0 podman[282750]: 2025-11-29 08:09:14.902953396 +0000 UTC m=+0.064456565 container create 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.931 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.933 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403754.830448, fb7c2a0f-da59-4d91-abfb-6de392bff759 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.933 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] VM Paused (Lifecycle Event)
Nov 29 08:09:14 compute-0 systemd[1]: Started libpod-conmon-5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf.scope.
Nov 29 08:09:14 compute-0 podman[282750]: 2025-11-29 08:09:14.869795614 +0000 UTC m=+0.031298803 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.978 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.983 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403754.8381438, fb7c2a0f-da59-4d91-abfb-6de392bff759 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.984 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] VM Resumed (Lifecycle Event)
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.990 255071 INFO nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Took 4.01 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:14 compute-0 nova_compute[255040]: 2025-11-29 08:09:14.991 255071 DEBUG nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae84079397835d5e2155c6fb0d16a37b8268301ad805cc19cc5f4404fcfcf94b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:15 compute-0 nova_compute[255040]: 2025-11-29 08:09:15.004 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:15 compute-0 nova_compute[255040]: 2025-11-29 08:09:15.008 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:15 compute-0 podman[282750]: 2025-11-29 08:09:15.015953845 +0000 UTC m=+0.177457034 container init 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 08:09:15 compute-0 podman[282750]: 2025-11-29 08:09:15.021798672 +0000 UTC m=+0.183301841 container start 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:09:15 compute-0 nova_compute[255040]: 2025-11-29 08:09:15.031 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:15 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [NOTICE]   (282790) : New worker (282793) forked
Nov 29 08:09:15 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [NOTICE]   (282790) : Loading success.
Nov 29 08:09:15 compute-0 nova_compute[255040]: 2025-11-29 08:09:15.063 255071 INFO nova.compute.manager [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Took 6.19 seconds to build instance.
Nov 29 08:09:15 compute-0 nova_compute[255040]: 2025-11-29 08:09:15.082 255071 DEBUG oslo_concurrency.lockutils [None req-1f951b1f-ab2d-40c3-93d8-095af89a85e9 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:15 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:15.114 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.123696623 +0000 UTC m=+0.052940295 container create a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 08:09:15 compute-0 systemd[1]: Started libpod-conmon-a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938.scope.
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.106374197 +0000 UTC m=+0.035617899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.258027935 +0000 UTC m=+0.187271637 container init a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.269846213 +0000 UTC m=+0.199089885 container start a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.27422352 +0000 UTC m=+0.203467192 container attach a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:09:15 compute-0 heuristic_elgamal[282817]: 167 167
Nov 29 08:09:15 compute-0 systemd[1]: libpod-a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938.scope: Deactivated successfully.
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.280631203 +0000 UTC m=+0.209874875 container died a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-51cf0bf0b8d6253d79d3667276ada798dcca9c821024409a56106ce00c04324c-merged.mount: Deactivated successfully.
Nov 29 08:09:15 compute-0 podman[282792]: 2025-11-29 08:09:15.349596478 +0000 UTC m=+0.278840150 container remove a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:09:15 compute-0 systemd[1]: libpod-conmon-a79bd8e3df9c0ae7d6e667c1c77241f904f54e68010f6b5f30e94db385251938.scope: Deactivated successfully.
Nov 29 08:09:15 compute-0 podman[282843]: 2025-11-29 08:09:15.564383995 +0000 UTC m=+0.057944870 container create 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:09:15 compute-0 systemd[1]: Started libpod-conmon-4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e.scope.
Nov 29 08:09:15 compute-0 podman[282843]: 2025-11-29 08:09:15.537245604 +0000 UTC m=+0.030806299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bed56294e1d157a1fe121a0b3c0757657c16312adc4751b5b9210308ea8cadc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bed56294e1d157a1fe121a0b3c0757657c16312adc4751b5b9210308ea8cadc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bed56294e1d157a1fe121a0b3c0757657c16312adc4751b5b9210308ea8cadc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bed56294e1d157a1fe121a0b3c0757657c16312adc4751b5b9210308ea8cadc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:15 compute-0 podman[282843]: 2025-11-29 08:09:15.67949623 +0000 UTC m=+0.173056895 container init 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:09:15 compute-0 podman[282843]: 2025-11-29 08:09:15.687818574 +0000 UTC m=+0.181379229 container start 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:09:15 compute-0 podman[282843]: 2025-11-29 08:09:15.691913954 +0000 UTC m=+0.185474639 container attach 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:09:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 76 op/s
Nov 29 08:09:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.274 255071 DEBUG nova.compute.manager [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.276 255071 DEBUG oslo_concurrency.lockutils [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.277 255071 DEBUG oslo_concurrency.lockutils [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.277 255071 DEBUG oslo_concurrency.lockutils [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.277 255071 DEBUG nova.compute.manager [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] No waiting events found dispatching network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:16 compute-0 nova_compute[255040]: 2025-11-29 08:09:16.278 255071 WARNING nova.compute.manager [req-dca5736a-21d1-4e29-90df-c4e77b6e0ee7 req-eae6ecaf-8384-4256-93e2-1ef2372817d6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received unexpected event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a for instance with vm_state active and task_state None.
Nov 29 08:09:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:16 compute-0 jovial_mayer[282860]: {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     "0": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "devices": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "/dev/loop3"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             ],
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_name": "ceph_lv0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_size": "21470642176",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "name": "ceph_lv0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "tags": {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_name": "ceph",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.crush_device_class": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.encrypted": "0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_id": "0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.vdo": "0"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             },
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "vg_name": "ceph_vg0"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         }
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     ],
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     "1": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "devices": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "/dev/loop4"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             ],
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_name": "ceph_lv1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_size": "21470642176",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "name": "ceph_lv1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "tags": {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_name": "ceph",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.crush_device_class": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.encrypted": "0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_id": "1",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.vdo": "0"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             },
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "vg_name": "ceph_vg1"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         }
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     ],
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     "2": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "devices": [
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "/dev/loop5"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             ],
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_name": "ceph_lv2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_size": "21470642176",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "name": "ceph_lv2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "tags": {
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.cluster_name": "ceph",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.crush_device_class": "",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.encrypted": "0",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osd_id": "2",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:                 "ceph.vdo": "0"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             },
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "type": "block",
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:             "vg_name": "ceph_vg2"
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:         }
Nov 29 08:09:16 compute-0 jovial_mayer[282860]:     ]
Nov 29 08:09:16 compute-0 jovial_mayer[282860]: }
Nov 29 08:09:16 compute-0 systemd[1]: libpod-4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e.scope: Deactivated successfully.
Nov 29 08:09:16 compute-0 podman[282843]: 2025-11-29 08:09:16.578893048 +0000 UTC m=+1.072453713 container died 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bed56294e1d157a1fe121a0b3c0757657c16312adc4751b5b9210308ea8cadc-merged.mount: Deactivated successfully.
Nov 29 08:09:16 compute-0 podman[282843]: 2025-11-29 08:09:16.652053217 +0000 UTC m=+1.145613892 container remove 4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 29 08:09:16 compute-0 systemd[1]: libpod-conmon-4e5ac702fb1ac31e0cf246fef7cd489d06072de029cea2e4da2e1550fa27073e.scope: Deactivated successfully.
Nov 29 08:09:16 compute-0 sudo[282641]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:16 compute-0 sudo[282883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:16 compute-0 ceph-mon[75237]: pgmap v1511: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 76 op/s
Nov 29 08:09:16 compute-0 sudo[282883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:16 compute-0 sudo[282883]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:16 compute-0 sudo[282908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:09:16 compute-0 sudo[282908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:16 compute-0 sudo[282908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:16 compute-0 sudo[282933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:16 compute-0 sudo[282933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:16 compute-0 sudo[282933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:17 compute-0 sudo[282958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:09:17 compute-0 sudo[282958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.368 255071 DEBUG nova.compute.manager [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.369 255071 DEBUG nova.compute.manager [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing instance network info cache due to event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.370 255071 DEBUG oslo_concurrency.lockutils [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.370 255071 DEBUG oslo_concurrency.lockutils [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.370 255071 DEBUG nova.network.neutron [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.404994676 +0000 UTC m=+0.053482030 container create fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:09:17 compute-0 systemd[1]: Started libpod-conmon-fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a.scope.
Nov 29 08:09:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.38471032 +0000 UTC m=+0.033197714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.504241195 +0000 UTC m=+0.152728589 container init fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.514203413 +0000 UTC m=+0.162690797 container start fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 08:09:17 compute-0 tender_clarke[283038]: 167 167
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.520342618 +0000 UTC m=+0.168830012 container attach fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:09:17 compute-0 systemd[1]: libpod-fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a.scope: Deactivated successfully.
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.522951289 +0000 UTC m=+0.171438653 container died fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f04c40b442c290590da3963d299d46cb2aaed0e7d8132b2c7a2241ba6117d06-merged.mount: Deactivated successfully.
Nov 29 08:09:17 compute-0 podman[283024]: 2025-11-29 08:09:17.575770498 +0000 UTC m=+0.224257862 container remove fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_clarke, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:09:17 compute-0 systemd[1]: libpod-conmon-fb210645d03b584f30215b6c38521fef51c2c2d2334867e513bf80f88c4fc63a.scope: Deactivated successfully.
Nov 29 08:09:17 compute-0 nova_compute[255040]: 2025-11-29 08:09:17.709 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 45 op/s
Nov 29 08:09:17 compute-0 podman[283062]: 2025-11-29 08:09:17.857592959 +0000 UTC m=+0.086854007 container create d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:17 compute-0 podman[283062]: 2025-11-29 08:09:17.818056265 +0000 UTC m=+0.047317343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:17 compute-0 systemd[1]: Started libpod-conmon-d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747.scope.
Nov 29 08:09:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da1c480e269f42cbfe8697e6a0451eb675c6aad7d305ba8bdfe7a0372406904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da1c480e269f42cbfe8697e6a0451eb675c6aad7d305ba8bdfe7a0372406904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da1c480e269f42cbfe8697e6a0451eb675c6aad7d305ba8bdfe7a0372406904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da1c480e269f42cbfe8697e6a0451eb675c6aad7d305ba8bdfe7a0372406904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:18 compute-0 podman[283062]: 2025-11-29 08:09:18.007400297 +0000 UTC m=+0.236661375 container init d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 08:09:18 compute-0 podman[283062]: 2025-11-29 08:09:18.015810534 +0000 UTC m=+0.245071582 container start d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:09:18 compute-0 podman[283062]: 2025-11-29 08:09:18.052602513 +0000 UTC m=+0.281863591 container attach d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:09:18 compute-0 nova_compute[255040]: 2025-11-29 08:09:18.634 255071 DEBUG nova.network.neutron [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updated VIF entry in instance network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:18 compute-0 nova_compute[255040]: 2025-11-29 08:09:18.636 255071 DEBUG nova.network.neutron [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:18 compute-0 nova_compute[255040]: 2025-11-29 08:09:18.653 255071 DEBUG oslo_concurrency.lockutils [req-d03779a8-3d44-4c7d-b459-1260fa67db41 req-b53f83e1-b6e8-4da4-84c2-0ca749c92208 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:18 compute-0 ceph-mon[75237]: pgmap v1512: 305 pgs: 305 active+clean; 306 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 45 op/s
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]: {
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_id": 2,
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "type": "bluestore"
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     },
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_id": 0,
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "type": "bluestore"
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     },
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_id": 1,
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:         "type": "bluestore"
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]:     }
Nov 29 08:09:19 compute-0 wonderful_proskuriakova[283076]: }
Nov 29 08:09:19 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:19.117 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:19 compute-0 systemd[1]: libpod-d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747.scope: Deactivated successfully.
Nov 29 08:09:19 compute-0 podman[283062]: 2025-11-29 08:09:19.157385765 +0000 UTC m=+1.386646813 container died d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:19 compute-0 systemd[1]: libpod-d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747.scope: Consumed 1.134s CPU time.
Nov 29 08:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4da1c480e269f42cbfe8697e6a0451eb675c6aad7d305ba8bdfe7a0372406904-merged.mount: Deactivated successfully.
Nov 29 08:09:19 compute-0 podman[283062]: 2025-11-29 08:09:19.237210642 +0000 UTC m=+1.466471710 container remove d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:09:19 compute-0 systemd[1]: libpod-conmon-d0f274f054cc5590124344a5ed404a3a2d70bb3e7ee1e24df26454856f965747.scope: Deactivated successfully.
Nov 29 08:09:19 compute-0 sudo[282958]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:09:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:09:19 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:19 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a5d20369-4cff-48c7-be76-28e80d8230a3 does not exist
Nov 29 08:09:19 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a6dcdc70-f1bf-4a24-8d2b-bb322f1c8cb1 does not exist
Nov 29 08:09:19 compute-0 sudo[283123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:19 compute-0 sudo[283123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:19 compute-0 sudo[283123]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:19 compute-0 sudo[283148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:09:19 compute-0 sudo[283148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:19 compute-0 sudo[283148]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.487 255071 DEBUG nova.compute.manager [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.487 255071 DEBUG nova.compute.manager [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing instance network info cache due to event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.488 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.488 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.488 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:19 compute-0 nova_compute[255040]: 2025-11-29 08:09:19.721 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 336 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 103 op/s
Nov 29 08:09:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 29 08:09:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 08:09:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.276 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.277 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.292 255071 DEBUG nova.objects.instance [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:09:20 compute-0 ceph-mon[75237]: pgmap v1513: 305 pgs: 305 active+clean; 336 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 103 op/s
Nov 29 08:09:20 compute-0 ceph-mon[75237]: osdmap e273: 3 total, 3 up, 3 in
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.312 255071 INFO nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Ignoring supplied device name: /dev/vdb
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.333 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.521 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.522 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.523 255071 INFO nova.compute.manager [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Attaching volume 735f9d0b-6554-4018-b9ec-79e9d7c3b890 to /dev/vdb
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.692 255071 DEBUG os_brick.utils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.694 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.711 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.711 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[89789a69-5e99-4707-b4cf-69438611657b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.713 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.723 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.723 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[025fd609-9a32-49d8-ae96-3bd6e10906d6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.726 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.736 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.737 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[3e80abd0-10a8-4353-9df7-6ba4ca529ce4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.739 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[c76fec3f-88e8-46bf-a528-467dff17aaa3]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.740 255071 DEBUG oslo_concurrency.processutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.768 255071 DEBUG oslo_concurrency.processutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.772 255071 DEBUG os_brick.initiator.connectors.lightos [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.772 255071 DEBUG os_brick.initiator.connectors.lightos [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.773 255071 DEBUG os_brick.initiator.connectors.lightos [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.773 255071 DEBUG os_brick.utils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.774 255071 DEBUG nova.virt.block_device [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating existing volume attachment record: 2de13d4f-331f-47e8-8f1d-af3e6bb4ac18 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.879 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updated VIF entry in instance network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.880 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.898 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.899 255071 DEBUG nova.compute.manager [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.899 255071 DEBUG nova.compute.manager [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing instance network info cache due to event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.899 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.900 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:20 compute-0 nova_compute[255040]: 2025-11-29 08:09:20.900 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.354 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403746.3536685, 818ae3fd-4905-4e82-8239-823ea098afa2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.355 255071 INFO nova.compute.manager [-] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] VM Stopped (Lifecycle Event)
Nov 29 08:09:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1333365702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.381 255071 DEBUG nova.compute.manager [None req-67d815e9-fc15-461d-a05b-fd0f53ab1c34 - - - - - -] [instance: 818ae3fd-4905-4e82-8239-823ea098afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1333365702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.473 255071 DEBUG nova.objects.instance [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.494 255071 DEBUG nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Attempting to attach volume 735f9d0b-6554-4018-b9ec-79e9d7c3b890 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.498 255071 DEBUG nova.virt.libvirt.guest [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:09:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-735f9d0b-6554-4018-b9ec-79e9d7c3b890">
Nov 29 08:09:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:09:21 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:21 compute-0 nova_compute[255040]:   <serial>735f9d0b-6554-4018-b9ec-79e9d7c3b890</serial>
Nov 29 08:09:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:09:21 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.632 255071 DEBUG nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.633 255071 DEBUG nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.634 255071 DEBUG nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.634 255071 DEBUG nova.virt.libvirt.driver [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] No VIF found with MAC fa:16:3e:22:1d:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 352 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Nov 29 08:09:21 compute-0 nova_compute[255040]: 2025-11-29 08:09:21.882 255071 DEBUG oslo_concurrency.lockutils [None req-4b5c58a0-b345-428d-a512-3880b4c73b7b 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:22 compute-0 ceph-mon[75237]: pgmap v1515: 305 pgs: 305 active+clean; 352 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.525 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updated VIF entry in instance network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.526 255071 DEBUG nova.network.neutron [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.542 255071 DEBUG oslo_concurrency.lockutils [req-587c00fe-7f3b-47de-822d-425a96819878 req-9fe29763-57d7-45ec-8b9f-8fc3b7bba775 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.712 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.758 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.759 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.774 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.850 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.851 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.858 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.860 255071 INFO nova.compute.claims [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:09:22 compute-0 nova_compute[255040]: 2025-11-29 08:09:22.995 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 29 08:09:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 08:09:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 29 08:09:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780189110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.536 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.545 255071 DEBUG nova.compute.provider_tree [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.574 255071 DEBUG nova.scheduler.client.report [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.609 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.610 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.673 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.674 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.702 255071 INFO nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.724 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 161 op/s
Nov 29 08:09:23 compute-0 nova_compute[255040]: 2025-11-29 08:09:23.786 255071 INFO nova.virt.block_device [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Booting with volume snapshot bbfb2452-058a-4421-befa-4929cf2f17a4 at /dev/vda
Nov 29 08:09:23 compute-0 podman[283222]: 2025-11-29 08:09:23.957821167 +0000 UTC m=+0.117513591 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:09:24 compute-0 nova_compute[255040]: 2025-11-29 08:09:24.008 255071 DEBUG nova.policy [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 29 08:09:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 08:09:24 compute-0 ceph-mon[75237]: osdmap e274: 3 total, 3 up, 3 in
Nov 29 08:09:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1780189110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:24 compute-0 ceph-mon[75237]: pgmap v1517: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 161 op/s
Nov 29 08:09:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 29 08:09:24 compute-0 nova_compute[255040]: 2025-11-29 08:09:24.628 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Successfully created port: 8c0aa401-308c-4663-a9d5-da4c2100c7c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:24 compute-0 nova_compute[255040]: 2025-11-29 08:09:24.724 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-0 ceph-mon[75237]: osdmap e275: 3 total, 3 up, 3 in
Nov 29 08:09:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 631 KiB/s wr, 113 op/s
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.446 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Successfully updated port: 8c0aa401-308c-4663-a9d5-da4c2100c7c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 29 08:09:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.469 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.469 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.469 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 29 08:09:26 compute-0 ceph-mon[75237]: pgmap v1519: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 631 KiB/s wr, 113 op/s
Nov 29 08:09:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.554 255071 DEBUG nova.compute.manager [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-changed-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.555 255071 DEBUG nova.compute.manager [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Refreshing instance network info cache due to event network-changed-8c0aa401-308c-4663-a9d5-da4c2100c7c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.555 255071 DEBUG oslo_concurrency.lockutils [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.616 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:26 compute-0 nova_compute[255040]: 2025-11-29 08:09:26.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:27.131 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:27.131 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:27.132 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:27 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:09:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 29 08:09:27 compute-0 ceph-mon[75237]: osdmap e276: 3 total, 3 up, 3 in
Nov 29 08:09:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 08:09:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.715 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.800 255071 DEBUG nova.network.neutron [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Updating instance_info_cache with network_info: [{"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.822 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.822 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Instance network_info: |[{"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.823 255071 DEBUG oslo_concurrency.lockutils [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.823 255071 DEBUG nova.network.neutron [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Refreshing network info cache for port 8c0aa401-308c-4663-a9d5-da4c2100c7c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.984 255071 DEBUG os_brick.utils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:09:27 compute-0 nova_compute[255040]: 2025-11-29 08:09:27.986 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.000 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.000 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[30c96556-f1b8-49b3-b01d-f2c4ed2f2e35]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.003 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.014 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.015 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[58ca7b4c-6fe6-4e0a-a3b8-c6a6115b33e6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.018 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.029 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.030 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1e5ac1-5f5a-4764-8aff-61ce9017d690]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.032 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8b494-682b-44d5-a218-182b514e35a6]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.032 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.066 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.070 255071 DEBUG os_brick.initiator.connectors.lightos [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.072 255071 DEBUG os_brick.utils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.072 255071 DEBUG nova.virt.block_device [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Updating existing volume attachment record: 956cc6fb-72ac-4bdf-9afa-9d61ec338756 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:09:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 29 08:09:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 08:09:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 29 08:09:28 compute-0 ceph-mon[75237]: osdmap e277: 3 total, 3 up, 3 in
Nov 29 08:09:28 compute-0 ceph-mon[75237]: pgmap v1522: 305 pgs: 305 active+clean; 352 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 29 08:09:28 compute-0 ovn_controller[153295]: 2025-11-29T08:09:28Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:82:3d 10.100.0.5
Nov 29 08:09:28 compute-0 ovn_controller[153295]: 2025-11-29T08:09:28Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:82:3d 10.100.0.5
Nov 29 08:09:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867044657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:09:28 compute-0 nova_compute[255040]: 2025-11-29 08:09:28.978 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.003 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.052 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.054 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.055 255071 INFO nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Creating image(s)
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.055 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.056 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Ensure instance console log exists: /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.056 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.056 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.057 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.059 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Start _get_guest_xml network_info=[{"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-dbaad290-ad3d-482c-8419-7a5198594e31', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'dbaad290-ad3d-482c-8419-7a5198594e31', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec', 'attached_at': '', 'detached_at': '', 'volume_id': 'dbaad290-ad3d-482c-8419-7a5198594e31', 'serial': 'dbaad290-ad3d-482c-8419-7a5198594e31'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': True, 'attachment_id': '956cc6fb-72ac-4bdf-9afa-9d61ec338756', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.064 255071 WARNING nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.070 255071 DEBUG nova.virt.libvirt.host [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.071 255071 DEBUG nova.virt.libvirt.host [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.076 255071 DEBUG nova.virt.libvirt.host [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.077 255071 DEBUG nova.virt.libvirt.host [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.077 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.078 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.078 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.079 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.079 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.079 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.079 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.080 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.080 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.080 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.081 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.082 255071 DEBUG nova.virt.hardware [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
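The hardware.py lines above trace the CPU-topology search: with no flavor or image preferences (0:0:0) and effectively unbounded limits (65536), nova enumerates every sockets*cores*threads split of the vCPU count and sorts the candidates. A rough sketch of that enumeration, reproducing the single-vCPU result logged above (the function name is illustrative):

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # enumerate every sockets*cores*threads factorization that yields exactly vcpus
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    print(list(possible_topologies(1)))
    # [VirtCPUTopology(sockets=1, cores=1, threads=1)]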
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.108 255071 DEBUG nova.storage.rbd_utils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
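rbd_utils decides the config-drive image is absent simply by trying to open it. A sketch of the same probe with the python-rados/rbd bindings, assuming the client id and conf path visible in the surrounding lines (context-manager support depends on the bindings' version):

    import rados
    import rbd

    def rbd_image_exists(pool, name, conf="/etc/ceph/ceph.conf", user="openstack"):
        # opening the image read-only is the probe; ImageNotFound means "does not exist"
        with rados.Rados(conffile=conf, rados_id=user) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False

    # e.g. rbd_image_exists("vms", "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config") -> False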
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.114 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.471 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.472 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.472 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.473 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.495 255071 DEBUG nova.network.neutron [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Updated VIF entry in instance network info cache for port 8c0aa401-308c-4663-a9d5-da4c2100c7c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.495 255071 DEBUG nova.network.neutron [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Updating instance_info_cache with network_info: [{"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.516 255071 DEBUG oslo_concurrency.lockutils [req-9f4fba49-b660-4fda-8795-10730c88259b req-4cc3ac09-d13f-46d7-b3aa-1c1ed52e0332 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:29 compute-0 ceph-mon[75237]: osdmap e278: 3 total, 3 up, 3 in
Nov 29 08:09:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/867044657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408891514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.599 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
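That mon dump call is how the driver learns the monitor addresses that later appear as <host> elements in the guest XML. A sketch of the same lookup; the exact JSON keys ("mons", "addr") may differ across Ceph releases:

    import json
    import subprocess

    def get_mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
        # same command the log shows oslo_concurrency.processutils running
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        # the monmap carries one entry per monitor; "addr" is "ip:port/nonce"
        return [m["addr"].split("/")[0] for m in json.loads(out)["mons"]]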
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.626 255071 DEBUG oslo_concurrency.lockutils [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.626 255071 DEBUG oslo_concurrency.lockutils [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.629 255071 DEBUG nova.virt.libvirt.vif [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1241050705',display_name='tempest-TestVolumeBootPattern-server-1241050705',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1241050705',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-3wqdzjmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:23Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.629 255071 DEBUG nova.network.os_vif_util [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.630 255071 DEBUG nova.network.os_vif_util [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.631 255071 DEBUG nova.objects.instance [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.648 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <uuid>c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec</uuid>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <name>instance-00000010</name>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-server-1241050705</nova:name>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:09:29</nova:creationTime>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <nova:port uuid="8c0aa401-308c-4663-a9d5-da4c2100c7c3">
Nov 29 08:09:29 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <system>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="serial">c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="uuid">c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </system>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <os>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </os>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <features>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </features>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config">
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </source>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-dbaad290-ad3d-482c-8419-7a5198594e31">
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </source>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:09:29 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <serial>dbaad290-ad3d-482c-8419-7a5198594e31</serial>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:7e:04:ee"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <target dev="tap8c0aa401-30"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/console.log" append="off"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <video>
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </video>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:09:29 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:09:29 compute-0 nova_compute[255040]: </domain>
Nov 29 08:09:29 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
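The block between "Start _get_guest_xml" and here is the fully rendered libvirt domain. For inspecting such a domain outside nova, a stdlib parse that extracts what the log cares about (instance name, disks with their rbd sources, VIF MACs); the source "name" attribute lookup assumes network-backed disks like the ones in this domain:

    import xml.etree.ElementTree as ET

    def summarize_domain(domain_xml):
        root = ET.fromstring(domain_xml)
        disks = [(d.get("device"),
                  d.find("target").get("dev"),
                  d.find("source").get("name"))
                 for d in root.findall("./devices/disk")]
        macs = [i.find("mac").get("address")
                for i in root.findall("./devices/interface")]
        return root.findtext("name"), disks, macs

    # for the domain logged above this returns:
    # ('instance-00000010',
    #  [('cdrom', 'sda', 'vms/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config'),
    #   ('disk', 'vda', 'volumes/volume-dbaad290-ad3d-482c-8419-7a5198594e31')],
    #  ['fa:16:3e:7e:04:ee'])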
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.648 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Preparing to wait for external event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.648 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.649 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.649 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
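The "<uuid>-events" lock traffic above is nova registering for the network-vif-plugged event before it plugs the VIF, so Neutron's notification cannot race the wait. The prepare-then-deliver shape of that pattern, reduced to the stdlib:

    import threading

    class InstanceEvents:
        # toy version of the prepare-then-wait pattern in compute/manager.py
        def __init__(self):
            self._lock = threading.Lock()  # the "<uuid>-events" lock in the log
            self._events = {}

        def prepare(self, instance_uuid, name):
            with self._lock:
                return self._events.setdefault((instance_uuid, name), threading.Event())

        def deliver(self, instance_uuid, name):
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    ev = events.prepare("c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec",
                        "network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3")
    # ... plug the VIF; when Neutron reports the port up, deliver() fires and
    # ev.wait(timeout=...) in the spawning thread returns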
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.649 255071 DEBUG nova.virt.libvirt.vif [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1241050705',display_name='tempest-TestVolumeBootPattern-server-1241050705',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1241050705',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-3wqdzjmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:23Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.650 255071 DEBUG nova.network.os_vif_util [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.650 255071 DEBUG nova.network.os_vif_util [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.651 255071 DEBUG os_vif [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.652 255071 INFO nova.compute.manager [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Detaching volume 735f9d0b-6554-4018-b9ec-79e9d7c3b890
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.654 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.654 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.655 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.660 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.661 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c0aa401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.661 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c0aa401-30, col_values=(('external_ids', {'iface-id': '8c0aa401-308c-4663-a9d5-da4c2100c7c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:04:ee', 'vm-uuid': 'c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:29 compute-0 NetworkManager[49116]: <info>  [1764403769.6649] manager: (tap8c0aa401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.666 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.693 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.698 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.699 255071 INFO os_vif [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30')
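The AddBridgeCommand/AddPortCommand/DbSetCommand transaction above is os-vif driving ovsdbapp; the external_ids are what lets ovn-controller later bind the port. A rough equivalent, assuming ovsdbapp's Open_vSwitch schema helper and the default local socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # one transaction: ensure the port exists, then tag it for OVN to bind
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap8c0aa401-30", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap8c0aa401-30",
            ("external_ids", {
                "iface-id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:7e:04:ee",
                "vm-uuid": "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec",
            })))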
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.726 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 377 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 3.2 MiB/s wr, 190 op/s
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.760 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.760 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.761 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:7e:04:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.761 255071 INFO nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Using config drive
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.783 255071 DEBUG nova.storage.rbd_utils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.811 255071 INFO nova.virt.block_device [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Attempting to driver detach volume 735f9d0b-6554-4018-b9ec-79e9d7c3b890 from mountpoint /dev/vdb
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.817 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.831 255071 DEBUG nova.virt.libvirt.driver [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Attempting to detach device vdb from instance 169b2d31-6539-4279-bf7a-f46078e1d624 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.832 255071 DEBUG nova.virt.libvirt.guest [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-735f9d0b-6554-4018-b9ec-79e9d7c3b890">
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </source>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <serial>735f9d0b-6554-4018-b9ec-79e9d7c3b890</serial>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]: </disk>
Nov 29 08:09:29 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.942 255071 INFO nova.virt.libvirt.driver [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully detached device vdb from instance 169b2d31-6539-4279-bf7a-f46078e1d624 from the persistent domain config.
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.943 255071 DEBUG nova.virt.libvirt.driver [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 169b2d31-6539-4279-bf7a-f46078e1d624 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:09:29 compute-0 nova_compute[255040]: 2025-11-29 08:09:29.944 255071 DEBUG nova.virt.libvirt.guest [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-735f9d0b-6554-4018-b9ec-79e9d7c3b890">
Nov 29 08:09:29 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   </source>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <serial>735f9d0b-6554-4018-b9ec-79e9d7c3b890</serial>
Nov 29 08:09:29 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:09:29 compute-0 nova_compute[255040]: </disk>
Nov 29 08:09:29 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.007 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764403770.0064635, 169b2d31-6539-4279-bf7a-f46078e1d624 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.008 255071 DEBUG nova.virt.libvirt.driver [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 169b2d31-6539-4279-bf7a-f46078e1d624 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.011 255071 INFO nova.virt.libvirt.driver [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully detached device vdb from instance 169b2d31-6539-4279-bf7a-f46078e1d624 from the live domain config.
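The detach above is nova's two-phase pattern: remove the device from the persistent definition, then from the live domain, with up to eight attempts gated on libvirt's device-removed event. A condensed sketch with the libvirt bindings (the real driver waits on the event between attempts rather than retrying blindly):

    import libvirt

    def detach_disk(dom, device_xml, retries=8):
        # phase 1: drop it from the persistent config so it never returns on reboot
        if dom.isPersistent():
            dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        # phase 2: drop it from the running guest; "(1/8): Attempting to detach..."
        for attempt in range(1, retries + 1):
            try:
                dom.detachDeviceFlags(device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
                return
            except libvirt.libvirtError:
                if attempt == retries:
                    raise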
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.202 255071 DEBUG nova.objects.instance [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'flavor' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.251 255071 DEBUG oslo_concurrency.lockutils [None req-df45d472-0c94-46ce-969f-a92a9a06d9d8 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.253 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.253 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.253 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.253 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.255 255071 INFO nova.compute.manager [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Terminating instance
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.256 255071 DEBUG nova.compute.manager [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.275 255071 INFO nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Creating config drive at /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.285 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy1eotkyk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
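The config drive is plain ISO9660: a staged temp directory packed with exactly the flags in the logged command, "-V config-2" being the volume label metadata readers probe for. As a standalone helper (the multi-word -publisher value is a single argv element, which is why the logged command looks unquoted):

    import subprocess

    def make_config_drive(iso_path, staging_dir):
        # identical flags to the logged command
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
            check=True)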
Nov 29 08:09:30 compute-0 kernel: tap6d9c5f72-46 (unregistering): left promiscuous mode
Nov 29 08:09:30 compute-0 NetworkManager[49116]: <info>  [1764403770.3347] device (tap6d9c5f72-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.347 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 ovn_controller[153295]: 2025-11-29T08:09:30Z|00148|binding|INFO|Releasing lport 6d9c5f72-469c-4971-a11c-287eba2d8490 from this chassis (sb_readonly=0)
Nov 29 08:09:30 compute-0 ovn_controller[153295]: 2025-11-29T08:09:30Z|00149|binding|INFO|Setting lport 6d9c5f72-469c-4971-a11c-287eba2d8490 down in Southbound
Nov 29 08:09:30 compute-0 ovn_controller[153295]: 2025-11-29T08:09:30Z|00150|binding|INFO|Removing iface tap6d9c5f72-46 ovn-installed in OVS
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.350 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.354 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:1d:1e 10.100.0.4'], port_security=['fa:16:3e:22:1d:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '169b2d31-6539-4279-bf7a-f46078e1d624', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1606039-8d07-4578-bb07-e1193dc21498', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '87f822d62c8f4ac6bed1a893f2b9e73f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f2aadc9d-fab8-454c-8d6d-96d62ba75cc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c31b5f0-bc6c-4aab-ba94-61fe7903fc35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=6d9c5f72-469c-4971-a11c-287eba2d8490) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.355 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 6d9c5f72-469c-4971-a11c-287eba2d8490 in datapath b1606039-8d07-4578-bb07-e1193dc21498 unbound from our chassis
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.357 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1606039-8d07-4578-bb07-e1193dc21498, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.358 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1e2509-c8c2-4f3b-92e4-325b08a50f2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.359 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 namespace which is not needed anymore
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.366 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 08:09:30 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 16.768s CPU time.
Nov 29 08:09:30 compute-0 systemd-machined[216271]: Machine qemu-13-instance-0000000d terminated.
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.500 255071 INFO nova.virt.libvirt.driver [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Instance destroyed successfully.
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.501 255071 DEBUG nova.objects.instance [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lazy-loading 'resources' on Instance uuid 169b2d31-6539-4279-bf7a-f46078e1d624 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.519 255071 DEBUG nova.virt.libvirt.vif [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1687597056',display_name='tempest-VolumesSnapshotTestJSON-instance-1687597056',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1687597056',id=13,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzXCfFmv5CFzeg5CC3EBDIzgpMgUXWz+ppHsYmCwwce636iY+6Tiw98CYBaZyZ6d1ODwaICbKpBZNxJYT/FMGzwNKpoQdJgsjUs1+53EMI7xDZW99L2NGxqLHAuB8aCfQ==',key_name='tempest-keypair-1970025283',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='87f822d62c8f4ac6bed1a893f2b9e73f',ramdisk_id='',reservation_id='r-akg0sr7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-248670584',owner_user_name='tempest-VolumesSnapshotTestJSON-248670584-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090eb6259968476885903b5734f6f67a',uuid=169b2d31-6539-4279-bf7a-f46078e1d624,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.520 255071 DEBUG nova.network.os_vif_util [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converting VIF {"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.521 255071 DEBUG nova.network.os_vif_util [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.521 255071 DEBUG os_vif [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:30 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [NOTICE]   (281484) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:30 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [NOTICE]   (281484) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:30 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [WARNING]  (281484) : Exiting Master process...
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.527 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [ALERT]    (281484) : Current worker (281486) exited with code 143 (Terminated)
Nov 29 08:09:30 compute-0 neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498[281480]: [WARNING]  (281484) : All workers exited. Exiting... (0)
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.529 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c5f72-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:30 compute-0 systemd[1]: libpod-8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3.scope: Deactivated successfully.
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.532 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.536 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:30 compute-0 podman[283342]: 2025-11-29 08:09:30.537180223 +0000 UTC m=+0.056610183 container died 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.539 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.542 255071 INFO os_vif [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:1d:1e,bridge_name='br-int',has_traffic_filtering=True,id=6d9c5f72-469c-4971-a11c-287eba2d8490,network=Network(b1606039-8d07-4578-bb07-e1193dc21498),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d9c5f72-46')
Nov 29 08:09:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1408891514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:30 compute-0 ceph-mon[75237]: pgmap v1524: 305 pgs: 305 active+clean; 377 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 524 KiB/s rd, 3.2 MiB/s wr, 190 op/s
Nov 29 08:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-420aa4b62d308dd83f558cee406e30a228b3d8025a675691135947b0869b96f2-merged.mount: Deactivated successfully.
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.575 255071 DEBUG nova.compute.manager [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-unplugged-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.575 255071 DEBUG oslo_concurrency.lockutils [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.575 255071 DEBUG oslo_concurrency.lockutils [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.576 255071 DEBUG oslo_concurrency.lockutils [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.576 255071 DEBUG nova.compute.manager [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] No waiting events found dispatching network-vif-unplugged-6d9c5f72-469c-4971-a11c-287eba2d8490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.577 255071 DEBUG nova.compute.manager [req-c097445b-dc2e-42de-b91d-8a6757bfc5fa req-3c1fe849-2f44-45a3-b614-6eb50453fa6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-unplugged-6d9c5f72-469c-4971-a11c-287eba2d8490 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:30 compute-0 podman[283342]: 2025-11-29 08:09:30.591534655 +0000 UTC m=+0.110964615 container cleanup 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:09:30 compute-0 systemd[1]: libpod-conmon-8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3.scope: Deactivated successfully.
Nov 29 08:09:30 compute-0 podman[283401]: 2025-11-29 08:09:30.665572247 +0000 UTC m=+0.050307004 container remove 8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.672 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1d233be9-f18c-4335-afd0-1aff03873e76]: (4, ('Sat Nov 29 08:09:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 (8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3)\n8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3\nSat Nov 29 08:09:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 (8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3)\n8076d3225d52e9ababb000a959af2e8eda9253238130c474de36e3cc7617ccd3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.675 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[509f650a-7010-4855-880b-ba121b22e564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.677 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1606039-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:30 compute-0 kernel: tapb1606039-80: left promiscuous mode
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.680 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.699 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.703 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[05071e51-b2af-4f98-81e8-ea060311d582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.717 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[102f9aa2-2b96-4cf4-b06a-4014253eca69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.719 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f6422576-5677-4d6d-981c-f1b0aa791b9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.740 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe21c2f-f2ac-4761-9b50-e77d7a93683a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596168, 'reachable_time': 25601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283418, 'error': None, 'target': 'ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 systemd[1]: run-netns-ovnmeta\x2db1606039\x2d8d07\x2d4578\x2dbb07\x2de1193dc21498.mount: Deactivated successfully.
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.744 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1606039-8d07-4578-bb07-e1193dc21498 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:30 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:30.744 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[6666093d-93f7-46c7-a1cd-fe2024ff2b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.784 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating instance_info_cache with network_info: [{"id": "6d9c5f72-469c-4971-a11c-287eba2d8490", "address": "fa:16:3e:22:1d:1e", "network": {"id": "b1606039-8d07-4578-bb07-e1193dc21498", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-920991102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87f822d62c8f4ac6bed1a893f2b9e73f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d9c5f72-46", "ovs_interfaceid": "6d9c5f72-469c-4971-a11c-287eba2d8490", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.803 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-169b2d31-6539-4279-bf7a-f46078e1d624" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.804 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.804 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.990 255071 INFO nova.virt.libvirt.driver [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Deleting instance files /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624_del
Nov 29 08:09:30 compute-0 nova_compute[255040]: 2025-11-29 08:09:30.991 255071 INFO nova.virt.libvirt.driver [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Deletion of /var/lib/nova/instances/169b2d31-6539-4279-bf7a-f46078e1d624_del complete
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.044 255071 INFO nova.compute.manager [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.045 255071 DEBUG oslo.service.loopingcall [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.045 255071 DEBUG nova.compute.manager [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.045 255071 DEBUG nova.network.neutron [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.434 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy1eotkyk" returned: 0 in 1.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.465 255071 DEBUG nova.storage.rbd_utils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.471 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.647 255071 DEBUG oslo_concurrency.processutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.648 255071 INFO nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Deleting local config drive /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec/disk.config because it was imported into RBD.
Nov 29 08:09:31 compute-0 kernel: tap8c0aa401-30: entered promiscuous mode
Nov 29 08:09:31 compute-0 NetworkManager[49116]: <info>  [1764403771.7222] manager: (tap8c0aa401-30): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 08:09:31 compute-0 systemd-udevd[283323]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:31 compute-0 ovn_controller[153295]: 2025-11-29T08:09:31Z|00151|binding|INFO|Claiming lport 8c0aa401-308c-4663-a9d5-da4c2100c7c3 for this chassis.
Nov 29 08:09:31 compute-0 ovn_controller[153295]: 2025-11-29T08:09:31Z|00152|binding|INFO|8c0aa401-308c-4663-a9d5-da4c2100c7c3: Claiming fa:16:3e:7e:04:ee 10.100.0.10
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.724 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 391 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 690 KiB/s rd, 4.2 MiB/s wr, 237 op/s
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.737 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:04:ee 10.100.0.10'], port_security=['fa:16:3e:7e:04:ee 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3cc607b-4336-4239-91d4-371fe33f0a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=8c0aa401-308c-4663-a9d5-da4c2100c7c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.738 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 8c0aa401-308c-4663-a9d5-da4c2100c7c3 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:09:31 compute-0 NetworkManager[49116]: <info>  [1764403771.7409] device (tap8c0aa401-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.740 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:31 compute-0 ovn_controller[153295]: 2025-11-29T08:09:31Z|00153|binding|INFO|Setting lport 8c0aa401-308c-4663-a9d5-da4c2100c7c3 ovn-installed in OVS
Nov 29 08:09:31 compute-0 ovn_controller[153295]: 2025-11-29T08:09:31Z|00154|binding|INFO|Setting lport 8c0aa401-308c-4663-a9d5-da4c2100c7c3 up in Southbound
Nov 29 08:09:31 compute-0 NetworkManager[49116]: <info>  [1764403771.7433] device (tap8c0aa401-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.744 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.749 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.755 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[da78c196-74d2-42fc-8357-d5fb82dbc289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.756 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e23492e-b1 in ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.759 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e23492e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.760 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[32b96c19-7f8a-40f3-a931-6418380929ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.760 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[58d55069-8072-4c9c-ab6b-03bd06c39cb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 systemd-machined[216271]: New machine qemu-16-instance-00000010.
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.776 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c9b0f9-dddc-485d-8980-f05972283a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.795 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6a42f0-a146-4f08-8e41-826f31502b7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.832 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d653d228-524f-4de6-94b6-10d0829e09d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 NetworkManager[49116]: <info>  [1764403771.8394] manager: (tap6e23492e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.837 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3aaeee-57b2-4f5c-9e91-1a6c4ff8b54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.878 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb921e6-9b3d-44e0-ac40-445bb69afe07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.881 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d876f508-9fc8-498a-82c1-a1eca2a5f943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 NetworkManager[49116]: <info>  [1764403771.9048] device (tap6e23492e-b0): carrier: link connected
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.913 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[45466f39-d87a-448f-99a9-d90d647255e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.935 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[379bf98c-04f6-4bde-afcc-845bf9587632]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600924, 'reachable_time': 41298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283501, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.954 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[29878235-d419-4663-abac-6cddfd602bc0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600924, 'tstamp': 600924}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283502, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:31 compute-0 nova_compute[255040]: 2025-11-29 08:09:31.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:31 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:31.975 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8568d7c3-026e-44e0-b7be-2133d4321e3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600924, 'reachable_time': 41298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283503, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.003 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.004 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.004 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.004 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.005 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.019 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a919acb8-e951-4604-9500-97a99a046c95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.111 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cf06826a-bb02-4f0a-b318-d9e29f7c10e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.113 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.113 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.113 255071 DEBUG nova.network.neutron [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.114 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.116 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:32 compute-0 NetworkManager[49116]: <info>  [1764403772.1168] manager: (tap6e23492e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 08:09:32 compute-0 kernel: tap6e23492e-b0: entered promiscuous mode
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.126 255071 DEBUG nova.compute.manager [req-a94c87b0-2692-487e-9d37-a826735ad10a req-0b8475ae-a3c6-43f7-afb0-91acc72f2f52 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.127 255071 DEBUG oslo_concurrency.lockutils [req-a94c87b0-2692-487e-9d37-a826735ad10a req-0b8475ae-a3c6-43f7-afb0-91acc72f2f52 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.127 255071 DEBUG oslo_concurrency.lockutils [req-a94c87b0-2692-487e-9d37-a826735ad10a req-0b8475ae-a3c6-43f7-afb0-91acc72f2f52 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.127 255071 DEBUG oslo_concurrency.lockutils [req-a94c87b0-2692-487e-9d37-a826735ad10a req-0b8475ae-a3c6-43f7-afb0-91acc72f2f52 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.128 255071 DEBUG nova.compute.manager [req-a94c87b0-2692-487e-9d37-a826735ad10a req-0b8475ae-a3c6-43f7-afb0-91acc72f2f52 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Processing event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.127 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:32 compute-0 ovn_controller[153295]: 2025-11-29T08:09:32Z|00155|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.129 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.132 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.139 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0fb7e5-368a-4f8c-861f-6aeace185c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.141 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:32.141 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'env', 'PROCESS_TAG=haproxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.144 255071 INFO nova.compute.manager [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Took 1.10 seconds to deallocate network for instance.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.150 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.252 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403772.2516255, c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.252 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] VM Started (Lifecycle Event)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.256 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.259 255071 WARNING nova.volume.cinder [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Attachment 2de13d4f-331f-47e8-8f1d-af3e6bb4ac18 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 2de13d4f-331f-47e8-8f1d-af3e6bb4ac18. (HTTP 404) (Request-ID: req-0b863235-34a8-47fc-8b96-8aebd4fe3802)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.261 255071 INFO nova.compute.manager [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Took 0.12 seconds to detach 1 volumes for instance.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.268 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.275 255071 INFO nova.virt.libvirt.driver [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Instance spawned successfully.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.276 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.303 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.311 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.315 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.315 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.316 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.316 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.317 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.317 255071 DEBUG nova.virt.libvirt.driver [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.354 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.355 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403772.2553592, c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.356 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] VM Paused (Lifecycle Event)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.358 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.358 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.376 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.384 255071 INFO nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Took 3.33 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.385 255071 DEBUG nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.386 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403772.266779, c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.386 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] VM Resumed (Lifecycle Event)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.402 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.406 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.437 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.454 255071 DEBUG oslo_concurrency.processutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584937852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.491 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.495 255071 INFO nova.compute.manager [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Took 9.68 seconds to build instance.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.512 255071 DEBUG oslo_concurrency.lockutils [None req-6af0df68-7bc1-41df-9be2-203cfa20fec7 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:32 compute-0 podman[283599]: 2025-11-29 08:09:32.575358399 +0000 UTC m=+0.072562213 container create f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.620 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.620 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:32 compute-0 systemd[1]: Started libpod-conmon-f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc.scope.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.628 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.629 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:32 compute-0 podman[283599]: 2025-11-29 08:09:32.542175186 +0000 UTC m=+0.039379010 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd95fedb1751edfdf582d2ba6ee7968665c3915b00149979b0dbb65578d2817/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:32 compute-0 podman[283611]: 2025-11-29 08:09:32.671812223 +0000 UTC m=+0.115610341 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:09:32 compute-0 podman[283599]: 2025-11-29 08:09:32.688848161 +0000 UTC m=+0.186052005 container init f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.693 255071 DEBUG nova.compute.manager [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.694 255071 DEBUG oslo_concurrency.lockutils [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.694 255071 DEBUG oslo_concurrency.lockutils [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.694 255071 DEBUG oslo_concurrency.lockutils [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.695 255071 DEBUG nova.compute.manager [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] No waiting events found dispatching network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.695 255071 WARNING nova.compute.manager [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received unexpected event network-vif-plugged-6d9c5f72-469c-4971-a11c-287eba2d8490 for instance with vm_state deleted and task_state None.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.695 255071 DEBUG nova.compute.manager [req-ff9f0594-379a-43b0-b175-4c659f5f7c24 req-035ce6fe-6ad3-4a78-bdbd-697a940cc407 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Received event network-vif-deleted-6d9c5f72-469c-4971-a11c-287eba2d8490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:32 compute-0 podman[283599]: 2025-11-29 08:09:32.696213108 +0000 UTC m=+0.193416933 container start f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:09:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [NOTICE]   (283655) : New worker (283657) forked
Nov 29 08:09:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [NOTICE]   (283655) : Loading success.
Nov 29 08:09:32 compute-0 ceph-mon[75237]: pgmap v1525: 305 pgs: 305 active+clean; 391 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 690 KiB/s rd, 4.2 MiB/s wr, 237 op/s
Nov 29 08:09:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3584937852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.923 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.924 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4152MB free_disk=59.942543029785156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.925 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224324639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:32 compute-0 nova_compute[255040]: 2025-11-29 08:09:32.996 255071 DEBUG oslo_concurrency.processutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.003 255071 DEBUG nova.compute.provider_tree [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.021 255071 DEBUG nova.scheduler.client.report [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.057 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.061 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.109 255071 INFO nova.scheduler.client.report [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Deleted allocations for instance 169b2d31-6539-4279-bf7a-f46078e1d624
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.185 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance fb7c2a0f-da59-4d91-abfb-6de392bff759 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.186 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.186 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.187 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.237 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.269 255071 DEBUG oslo_concurrency.lockutils [None req-e1e4cc5c-b10d-42cd-b210-85ace3d7534a 090eb6259968476885903b5734f6f67a 87f822d62c8f4ac6bed1a893f2b9e73f - - default default] Lock "169b2d31-6539-4279-bf7a-f46078e1d624" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123644453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.729 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 365 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1018 KiB/s rd, 3.5 MiB/s wr, 240 op/s
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.736 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.766 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.802 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:09:33 compute-0 nova_compute[255040]: 2025-11-29 08:09:33.803 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.252 255071 DEBUG nova.compute.manager [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.252 255071 DEBUG oslo_concurrency.lockutils [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.253 255071 DEBUG oslo_concurrency.lockutils [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.253 255071 DEBUG oslo_concurrency.lockutils [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.253 255071 DEBUG nova.compute.manager [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] No waiting events found dispatching network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.253 255071 WARNING nova.compute.manager [req-86fc4ef8-77c5-4082-9225-8c95b99cd9a4 req-3833f363-b7b8-47f4-b69c-531fbd74e81a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received unexpected event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 for instance with vm_state active and task_state None.
Nov 29 08:09:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/224324639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1123644453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.730 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.797 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.799 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.799 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.800 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.800 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.801 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.801 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.801 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.803 255071 INFO nova.compute.manager [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Terminating instance
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.804 255071 DEBUG nova.compute.manager [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:34 compute-0 kernel: tap8c0aa401-30 (unregistering): left promiscuous mode
Nov 29 08:09:34 compute-0 NetworkManager[49116]: <info>  [1764403774.8488] device (tap8c0aa401-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:34 compute-0 ovn_controller[153295]: 2025-11-29T08:09:34Z|00156|binding|INFO|Releasing lport 8c0aa401-308c-4663-a9d5-da4c2100c7c3 from this chassis (sb_readonly=0)
Nov 29 08:09:34 compute-0 ovn_controller[153295]: 2025-11-29T08:09:34Z|00157|binding|INFO|Setting lport 8c0aa401-308c-4663-a9d5-da4c2100c7c3 down in Southbound
Nov 29 08:09:34 compute-0 ovn_controller[153295]: 2025-11-29T08:09:34Z|00158|binding|INFO|Removing iface tap8c0aa401-30 ovn-installed in OVS
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.902 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.904 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:34.910 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:04:ee 10.100.0.10'], port_security=['fa:16:3e:7e:04:ee 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3cc607b-4336-4239-91d4-371fe33f0a2f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=8c0aa401-308c-4663-a9d5-da4c2100c7c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:34.912 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 8c0aa401-308c-4663-a9d5-da4c2100c7c3 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:34.913 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:34.914 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b32633f1-6c7c-40ed-b7f7-e0a9f094c691]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:34.915 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace which is not needed anymore
Nov 29 08:09:34 compute-0 nova_compute[255040]: 2025-11-29 08:09:34.918 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:34 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 08:09:34 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.082s CPU time.
Nov 29 08:09:34 compute-0 systemd-machined[216271]: Machine qemu-16-instance-00000010 terminated.
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.048 255071 INFO nova.virt.libvirt.driver [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Instance destroyed successfully.
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.049 255071 DEBUG nova.objects.instance [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.065 255071 DEBUG nova.virt.libvirt.vif [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1241050705',display_name='tempest-TestVolumeBootPattern-server-1241050705',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1241050705',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-3wqdzjmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:32Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.066 255071 DEBUG nova.network.os_vif_util [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "address": "fa:16:3e:7e:04:ee", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c0aa401-30", "ovs_interfaceid": "8c0aa401-308c-4663-a9d5-da4c2100c7c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.066 255071 DEBUG nova.network.os_vif_util [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.067 255071 DEBUG os_vif [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.069 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.069 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c0aa401-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.071 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.073 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.077 255071 INFO os_vif [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:04:ee,bridge_name='br-int',has_traffic_filtering=True,id=8c0aa401-308c-4663-a9d5-da4c2100c7c3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c0aa401-30')
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.507 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.508 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.508 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.508 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.509 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.510 255071 INFO nova.compute.manager [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Terminating instance
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.511 255071 DEBUG nova.compute.manager [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:35 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [NOTICE]   (283655) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:35 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [NOTICE]   (283655) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:35 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [WARNING]  (283655) : Exiting Master process...
Nov 29 08:09:35 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [ALERT]    (283655) : Current worker (283657) exited with code 143 (Terminated)
Nov 29 08:09:35 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[283642]: [WARNING]  (283655) : All workers exited. Exiting... (0)
Nov 29 08:09:35 compute-0 systemd[1]: libpod-f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc.scope: Deactivated successfully.
Nov 29 08:09:35 compute-0 podman[283714]: 2025-11-29 08:09:35.581772503 +0000 UTC m=+0.561141562 container died f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:09:35 compute-0 ceph-mon[75237]: pgmap v1526: 305 pgs: 305 active+clean; 365 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1018 KiB/s rd, 3.5 MiB/s wr, 240 op/s
Nov 29 08:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bd95fedb1751edfdf582d2ba6ee7968665c3915b00149979b0dbb65578d2817-merged.mount: Deactivated successfully.
Nov 29 08:09:35 compute-0 kernel: tapbeabc602-cb (unregistering): left promiscuous mode
Nov 29 08:09:35 compute-0 NetworkManager[49116]: <info>  [1764403775.6433] device (tapbeabc602-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:35 compute-0 podman[283714]: 2025-11-29 08:09:35.651702094 +0000 UTC m=+0.631071143 container cleanup f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:09:35 compute-0 ovn_controller[153295]: 2025-11-29T08:09:35Z|00159|binding|INFO|Releasing lport beabc602-cbf2-4e13-adbf-6e5254ac8e0a from this chassis (sb_readonly=0)
Nov 29 08:09:35 compute-0 ovn_controller[153295]: 2025-11-29T08:09:35Z|00160|binding|INFO|Setting lport beabc602-cbf2-4e13-adbf-6e5254ac8e0a down in Southbound
Nov 29 08:09:35 compute-0 ovn_controller[153295]: 2025-11-29T08:09:35Z|00161|binding|INFO|Removing iface tapbeabc602-cb ovn-installed in OVS
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.655 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.662 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:82:3d 10.100.0.5'], port_security=['fa:16:3e:60:82:3d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fb7c2a0f-da59-4d91-abfb-6de392bff759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a7ec55d-851c-4f69-99cf-2136c772174e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b82a0d97ae1643c5827b47c48ab0fc71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3d9dde9-3ce8-45c8-ac1f-27139dcb9640', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e47df327-b9d6-42fa-8a7d-8e08f0aa09b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=beabc602-cbf2-4e13-adbf-6e5254ac8e0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:35 compute-0 systemd[1]: libpod-conmon-f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc.scope: Deactivated successfully.
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.679 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 29 08:09:35 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 15.018s CPU time.
Nov 29 08:09:35 compute-0 systemd-machined[216271]: Machine qemu-15-instance-0000000f terminated.
Nov 29 08:09:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 316 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 322 op/s
Nov 29 08:09:35 compute-0 NetworkManager[49116]: <info>  [1764403775.7358] manager: (tapbeabc602-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.737 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.743 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 podman[283774]: 2025-11-29 08:09:35.744288374 +0000 UTC m=+0.056359467 container remove f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.752 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[52ae6afe-5671-4a53-a659-5834de5b50d8]: (4, ('Sat Nov 29 08:09:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc)\nf0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc\nSat Nov 29 08:09:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (f0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc)\nf0cb7531d73308e83b189b8230f58d3e2628d581538b2576c8bac5252d4c50fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.755 255071 INFO nova.virt.libvirt.driver [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Instance destroyed successfully.
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.755 255071 DEBUG nova.objects.instance [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lazy-loading 'resources' on Instance uuid fb7c2a0f-da59-4d91-abfb-6de392bff759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.755 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[85b8a157-27eb-4e99-824b-0b9d110d76df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.757 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.759 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 kernel: tap6e23492e-b0: left promiscuous mode
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.779 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.780 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.783 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f26a9f-a77c-47fa-83bd-61eb98c914bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.790 255071 DEBUG nova.virt.libvirt.vif [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-110471317',display_name='tempest-TestVolumeBackupRestore-server-110471317',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-110471317',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH/xyTZP4ASJ0cxi1kioa3QVMCW3LA7If4GnthVEkBTP7C1Y9t2v6xrSBYUsfIwbI+dkIDldNWyWJhdAyt0g4ZJdGVF4vKANTylMU2zJMN3r5qJ2x1ZOtIcri0Br71jRUg==',key_name='tempest-TestVolumeBackupRestore-1588688224',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b82a0d97ae1643c5827b47c48ab0fc71',ramdisk_id='',reservation_id='r-kmdzpea0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1109760377',owner_user_name='tempest-TestVolumeBackupRestore-1109760377-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:15Z,user_data=None,user_id='d35494d39d8d404891546638d8f87af5',uuid=fb7c2a0f-da59-4d91-abfb-6de392bff759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.791 255071 DEBUG nova.network.os_vif_util [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converting VIF {"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.791 255071 DEBUG nova.network.os_vif_util [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.792 255071 DEBUG os_vif [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.794 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.794 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbeabc602-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.796 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.799 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.799 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3f560be7-e3dd-4947-b639-1541408d2cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.800 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.800 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[682ca24b-869c-43d1-ae0f-23d8fa22d7dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.803 255071 INFO os_vif [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:82:3d,bridge_name='br-int',has_traffic_filtering=True,id=beabc602-cbf2-4e13-adbf-6e5254ac8e0a,network=Network(0a7ec55d-851c-4f69-99cf-2136c772174e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbeabc602-cb')
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.820 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[687ff2c5-44b6-49c8-a572-2ba32ab8ae06]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600916, 'reachable_time': 25038, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283798, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.823 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.823 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[365febd2-228c-4d8f-972a-158cda3eb760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.824 163500 INFO neutron.agent.ovn.metadata.agent [-] Port beabc602-cbf2-4e13-adbf-6e5254ac8e0a in datapath 0a7ec55d-851c-4f69-99cf-2136c772174e unbound from our chassis
Nov 29 08:09:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e23492e\x2dbeff\x2d43f6\x2db4d1\x2df88ebeea0b6f.mount: Deactivated successfully.
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.827 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a7ec55d-851c-4f69-99cf-2136c772174e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.828 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4256ca-a822-4531-9286-96de5b7926d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:35 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:35.828 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e namespace which is not needed anymore
Nov 29 08:09:35 compute-0 nova_compute[255040]: 2025-11-29 08:09:35.981 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.357 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-unplugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.358 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.358 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.358 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.359 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] No waiting events found dispatching network-vif-unplugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.359 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-unplugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.359 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.359 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing instance network info cache due to event network-changed-beabc602-cbf2-4e13-adbf-6e5254ac8e0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.360 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.360 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:36 compute-0 nova_compute[255040]: 2025-11-29 08:09:36.360 255071 DEBUG nova.network.neutron [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Refreshing network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 29 08:09:37 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [NOTICE]   (282790) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:37 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [NOTICE]   (282790) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:37 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [WARNING]  (282790) : Exiting Master process...
Nov 29 08:09:37 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [ALERT]    (282790) : Current worker (282793) exited with code 143 (Terminated)
Nov 29 08:09:37 compute-0 neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e[282786]: [WARNING]  (282790) : All workers exited. Exiting... (0)
Nov 29 08:09:37 compute-0 systemd[1]: libpod-5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf.scope: Deactivated successfully.
Nov 29 08:09:37 compute-0 podman[283834]: 2025-11-29 08:09:37.2604633 +0000 UTC m=+1.335890359 container died 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:09:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 316 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 264 op/s
Nov 29 08:09:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 08:09:37 compute-0 nova_compute[255040]: 2025-11-29 08:09:37.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 29 08:09:38 compute-0 ceph-mon[75237]: pgmap v1527: 305 pgs: 305 active+clean; 316 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 322 op/s
Nov 29 08:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae84079397835d5e2155c6fb0d16a37b8268301ad805cc19cc5f4404fcfcf94b-merged.mount: Deactivated successfully.
Nov 29 08:09:38 compute-0 podman[283834]: 2025-11-29 08:09:38.212616718 +0000 UTC m=+2.288043777 container cleanup 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.227 255071 INFO nova.virt.libvirt.driver [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Deleting instance files /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_del
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.230 255071 INFO nova.virt.libvirt.driver [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Deletion of /var/lib/nova/instances/c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec_del complete
Nov 29 08:09:38 compute-0 systemd[1]: libpod-conmon-5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf.scope: Deactivated successfully.
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.318 255071 INFO nova.compute.manager [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Took 3.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.318 255071 DEBUG oslo.service.loopingcall [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.319 255071 DEBUG nova.compute.manager [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:38 compute-0 nova_compute[255040]: 2025-11-29 08:09:38.319 255071 DEBUG nova.network.neutron [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:09:38
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'images', 'default.rgw.log']
Nov 29 08:09:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:09:39 compute-0 nova_compute[255040]: 2025-11-29 08:09:39.732 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 951 KiB/s wr, 196 op/s
Nov 29 08:09:40 compute-0 podman[283865]: 2025-11-29 08:09:40.373699988 +0000 UTC m=+2.132169733 container remove 5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.380 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[17a07940-3328-4b13-a612-b1fcb587b222]: (4, ('Sat Nov 29 08:09:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e (5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf)\n5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf\nSat Nov 29 08:09:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e (5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf)\n5bfaa43e00c200d20a2f72bcd8ed1b2e5d38127ec04632bc47f753b2abc82acf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.382 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[597bb0e9-2110-4ab6-bb76-c27e7bd4db5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.383 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a7ec55d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:40 compute-0 kernel: tap0a7ec55d-80: left promiscuous mode
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.386 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.401 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.405 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad7d995-d8e0-475f-b6b5-080981369cc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.423 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[329ffd0c-40d8-479b-843b-32ec5015bb45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.425 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[25f7c9b2-a4d5-4c44-9a00-1764536c6689]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.447 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb59f75-a0e0-483d-b891-d5e0f75356e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599133, 'reachable_time': 36615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283879, 'error': None, 'target': 'ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d0a7ec55d\x2d851c\x2d4f69\x2d99cf\x2d2136c772174e.mount: Deactivated successfully.
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.450 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a7ec55d-851c-4f69-99cf-2136c772174e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:09:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:09:40.450 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[2c92b914-c005-4373-8cac-1edd3d957e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:40 compute-0 podman[283878]: 2025-11-29 08:09:40.523545488 +0000 UTC m=+0.074605197 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.552 255071 DEBUG nova.network.neutron [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updated VIF entry in instance network info cache for port beabc602-cbf2-4e13-adbf-6e5254ac8e0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.553 255071 DEBUG nova.network.neutron [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [{"id": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "address": "fa:16:3e:60:82:3d", "network": {"id": "0a7ec55d-851c-4f69-99cf-2136c772174e", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1026596325-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b82a0d97ae1643c5827b47c48ab0fc71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbeabc602-cb", "ovs_interfaceid": "beabc602-cbf2-4e13-adbf-6e5254ac8e0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.581 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-fb7c2a0f-da59-4d91-abfb-6de392bff759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.582 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.582 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.582 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.583 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.583 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] No waiting events found dispatching network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.583 255071 WARNING nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received unexpected event network-vif-plugged-8c0aa401-308c-4663-a9d5-da4c2100c7c3 for instance with vm_state active and task_state deleting.
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.583 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-unplugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.583 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.584 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.584 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.584 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] No waiting events found dispatching network-vif-unplugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.585 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-unplugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.585 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.585 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.585 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.585 255071 DEBUG oslo_concurrency.lockutils [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.586 255071 DEBUG nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] No waiting events found dispatching network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.586 255071 WARNING nova.compute.manager [req-68f37357-c4fe-4077-8f37-2aeff31ad408 req-189e50c5-0871-4fa5-a134-b5fbdb9a967d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received unexpected event network-vif-plugged-beabc602-cbf2-4e13-adbf-6e5254ac8e0a for instance with vm_state active and task_state deleting.
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.763 255071 DEBUG nova.network.neutron [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.781 255071 INFO nova.compute.manager [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Took 2.46 seconds to deallocate network for instance.
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.796 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:40 compute-0 nova_compute[255040]: 2025-11-29 08:09:40.893 255071 DEBUG nova.compute.manager [req-4008dc3f-e062-41d7-bdff-7e1468b974eb req-cb6c0733-463a-49d8-8c6f-c54ddf47f5f1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Received event network-vif-deleted-8c0aa401-308c-4663-a9d5-da4c2100c7c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:40 compute-0 ceph-mon[75237]: pgmap v1528: 305 pgs: 305 active+clean; 316 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 264 op/s
Nov 29 08:09:40 compute-0 ceph-mon[75237]: osdmap e279: 3 total, 3 up, 3 in
Nov 29 08:09:41 compute-0 nova_compute[255040]: 2025-11-29 08:09:41.013 255071 INFO nova.compute.manager [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Took 0.23 seconds to detach 1 volumes for instance.
Nov 29 08:09:41 compute-0 nova_compute[255040]: 2025-11-29 08:09:41.016 255071 DEBUG nova.compute.manager [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Deleting volume: dbaad290-ad3d-482c-8419-7a5198594e31 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:09:41 compute-0 nova_compute[255040]: 2025-11-29 08:09:41.233 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:41 compute-0 nova_compute[255040]: 2025-11-29 08:09:41.234 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:41 compute-0 nova_compute[255040]: 2025-11-29 08:09:41.302 255071 DEBUG oslo_concurrency.processutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 49 KiB/s wr, 159 op/s
Nov 29 08:09:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842229991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.764 255071 DEBUG oslo_concurrency.processutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.774 255071 DEBUG nova.compute.provider_tree [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.807 255071 DEBUG nova.scheduler.client.report [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:42 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 08:09:42 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:42.838429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:09:42 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 08:09:42 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403782838680, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2277, "num_deletes": 266, "total_data_size": 3390264, "memory_usage": 3445168, "flush_reason": "Manual Compaction"}
Nov 29 08:09:42 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.839 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.879 255071 INFO nova.scheduler.client.report [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec
Nov 29 08:09:42 compute-0 nova_compute[255040]: 2025-11-29 08:09:42.973 255071 DEBUG oslo_concurrency.lockutils [None req-0e784e39-c219-4c33-8de3-f6bd9ae52c21 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:09:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 32 KiB/s wr, 127 op/s
Nov 29 08:09:44 compute-0 nova_compute[255040]: 2025-11-29 08:09:44.735 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403785426857, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3327080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26810, "largest_seqno": 29086, "table_properties": {"data_size": 3316587, "index_size": 6731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22325, "raw_average_key_size": 21, "raw_value_size": 3295425, "raw_average_value_size": 3120, "num_data_blocks": 293, "num_entries": 1056, "num_filter_entries": 1056, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403589, "oldest_key_time": 1764403589, "file_creation_time": 1764403782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 2588482 microseconds, and 10415 cpu microseconds.
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.426939) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3327080 bytes OK
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.426967) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.451249) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.451329) EVENT_LOG_v1 {"time_micros": 1764403785451313, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.451371) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3380479, prev total WAL file size 3422971, number of live WAL files 2.
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.453393) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3249KB)], [59(8002KB)]
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403785453631, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11521847, "oldest_snapshot_seqno": -1}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: pgmap v1530: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 951 KiB/s wr, 196 op/s
Nov 29 08:09:45 compute-0 nova_compute[255040]: 2025-11-29 08:09:45.497 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403770.4956226, 169b2d31-6539-4279-bf7a-f46078e1d624 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:45 compute-0 nova_compute[255040]: 2025-11-29 08:09:45.497 255071 INFO nova.compute.manager [-] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] VM Stopped (Lifecycle Event)
Nov 29 08:09:45 compute-0 nova_compute[255040]: 2025-11-29 08:09:45.524 255071 DEBUG nova.compute.manager [None req-7f989c96-db70-4856-bb7b-8d6adec81ba4 - - - - - -] [instance: 169b2d31-6539-4279-bf7a-f46078e1d624] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5873 keys, 9789381 bytes, temperature: kUnknown
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403785750507, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9789381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9746519, "index_size": 27072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 147209, "raw_average_key_size": 25, "raw_value_size": 9637229, "raw_average_value_size": 1640, "num_data_blocks": 1098, "num_entries": 5873, "num_filter_entries": 5873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.750941) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9789381 bytes
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.758753) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.8 rd, 33.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6411, records dropped: 538 output_compression: NoCompression
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.758818) EVENT_LOG_v1 {"time_micros": 1764403785758792, "job": 32, "event": "compaction_finished", "compaction_time_micros": 297023, "compaction_time_cpu_micros": 29467, "output_level": 6, "num_output_files": 1, "total_output_size": 9789381, "num_input_records": 6411, "num_output_records": 5873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403785760102, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403785762664, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.453241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.762699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.762706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.762707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.762709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:09:45.762710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:09:45 compute-0 nova_compute[255040]: 2025-11-29 08:09:45.798 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:46 compute-0 ceph-mon[75237]: pgmap v1531: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 49 KiB/s wr, 159 op/s
Nov 29 08:09:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/842229991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:46 compute-0 ceph-mon[75237]: pgmap v1532: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 32 KiB/s wr, 127 op/s
Nov 29 08:09:46 compute-0 ceph-mon[75237]: pgmap v1534: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 29 08:09:46 compute-0 ceph-mon[75237]: osdmap e280: 3 total, 3 up, 3 in
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.730 255071 INFO nova.virt.libvirt.driver [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Deleting instance files /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759_del
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.731 255071 INFO nova.virt.libvirt.driver [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Deletion of /var/lib/nova/instances/fb7c2a0f-da59-4d91-abfb-6de392bff759_del complete
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.805 255071 INFO nova.compute.manager [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Took 11.29 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.806 255071 DEBUG oslo.service.loopingcall [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.806 255071 DEBUG nova.compute.manager [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:46 compute-0 nova_compute[255040]: 2025-11-29 08:09:46.806 255071 DEBUG nova.network.neutron [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3008630659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3008630659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.334 255071 DEBUG nova.network.neutron [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.354 255071 INFO nova.compute.manager [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Took 0.55 seconds to deallocate network for instance.
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.407 255071 DEBUG nova.compute.manager [req-92ab698f-d7cf-408d-99f5-ace1c0a7c2c3 req-4332dc89-d679-4176-a8a6-f0d5cd1801ac cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Received event network-vif-deleted-beabc602-cbf2-4e13-adbf-6e5254ac8e0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.505 255071 INFO nova.compute.manager [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Took 0.15 seconds to detach 1 volumes for instance.
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.561 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.562 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:47 compute-0 nova_compute[255040]: 2025-11-29 08:09:47.621 255071 DEBUG oslo_concurrency.processutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3008630659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3008630659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Nov 29 08:09:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425372073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.091 255071 DEBUG oslo_concurrency.processutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.100 255071 DEBUG nova.compute.provider_tree [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.125 255071 DEBUG nova.scheduler.client.report [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.148 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.173 255071 INFO nova.scheduler.client.report [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Deleted allocations for instance fb7c2a0f-da59-4d91-abfb-6de392bff759
Nov 29 08:09:48 compute-0 nova_compute[255040]: 2025-11-29 08:09:48.243 255071 DEBUG oslo_concurrency.lockutils [None req-873bd9f4-03b7-4215-801f-913ebc7d85b0 d35494d39d8d404891546638d8f87af5 b82a0d97ae1643c5827b47c48ab0fc71 - - default default] Lock "fb7c2a0f-da59-4d91-abfb-6de392bff759" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 29 08:09:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 08:09:48 compute-0 ceph-mon[75237]: pgmap v1535: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Nov 29 08:09:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3425372073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117228883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117228883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 29 08:09:49 compute-0 ceph-mon[75237]: osdmap e281: 3 total, 3 up, 3 in
Nov 29 08:09:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3117228883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3117228883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 08:09:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 29 08:09:49 compute-0 nova_compute[255040]: 2025-11-29 08:09:49.738 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.5 KiB/s wr, 121 op/s
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3692282000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3692282000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.047 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403775.044274, c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.047 255071 INFO nova.compute.manager [-] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] VM Stopped (Lifecycle Event)
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.065 255071 DEBUG nova.compute.manager [None req-9caf4a5c-d5db-47f7-a0a2-b7f463aac7b6 - - - - - -] [instance: c2551f3d-fd2e-4a4c-aa2f-a9bee6d269ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308366991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308366991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2720402337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2720402337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 29 08:09:50 compute-0 ceph-mon[75237]: osdmap e282: 3 total, 3 up, 3 in
Nov 29 08:09:50 compute-0 ceph-mon[75237]: pgmap v1538: 305 pgs: 305 active+clean; 315 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.5 KiB/s wr, 121 op/s
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3692282000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3692282000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2308366991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2308366991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2720402337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2720402337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 08:09:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.753 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403775.751573, fb7c2a0f-da59-4d91-abfb-6de392bff759 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.753 255071 INFO nova.compute.manager [-] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] VM Stopped (Lifecycle Event)
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.776 255071 DEBUG nova.compute.manager [None req-50e6979e-fbb5-4503-b951-bac276e13dd1 - - - - - -] [instance: fb7c2a0f-da59-4d91-abfb-6de392bff759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:50 compute-0 nova_compute[255040]: 2025-11-29 08:09:50.843 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:51 compute-0 ceph-mon[75237]: osdmap e283: 3 total, 3 up, 3 in
Nov 29 08:09:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 219 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 8.5 KiB/s wr, 185 op/s
Nov 29 08:09:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064693450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064693450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:52 compute-0 ceph-mon[75237]: pgmap v1540: 305 pgs: 305 active+clean; 219 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 8.5 KiB/s wr, 185 op/s
Nov 29 08:09:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4064693450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4064693450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 141 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 940 KiB/s rd, 14 KiB/s wr, 258 op/s
Nov 29 08:09:53 compute-0 nova_compute[255040]: 2025-11-29 08:09:53.752 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-0 nova_compute[255040]: 2025-11-29 08:09:54.071 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-0 nova_compute[255040]: 2025-11-29 08:09:54.741 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 29 08:09:54 compute-0 ceph-mon[75237]: pgmap v1541: 305 pgs: 305 active+clean; 141 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 940 KiB/s rd, 14 KiB/s wr, 258 op/s
Nov 29 08:09:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 08:09:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 29 08:09:54 compute-0 podman[283950]: 2025-11-29 08:09:54.953479686 +0000 UTC m=+0.116514535 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:09:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 88 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 11 KiB/s wr, 242 op/s
Nov 29 08:09:55 compute-0 ceph-mon[75237]: osdmap e284: 3 total, 3 up, 3 in
Nov 29 08:09:55 compute-0 nova_compute[255040]: 2025-11-29 08:09:55.846 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003470144925766751 of space, bias 1.0, pg target 0.10410434777300254 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:09:56 compute-0 ceph-mon[75237]: pgmap v1543: 305 pgs: 305 active+clean; 88 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 11 KiB/s wr, 242 op/s
Nov 29 08:09:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/776917611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/776917611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 29 08:09:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 08:09:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 29 08:09:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 88 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.8 KiB/s wr, 138 op/s
Nov 29 08:09:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/776917611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/776917611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:57 compute-0 ceph-mon[75237]: osdmap e285: 3 total, 3 up, 3 in
Nov 29 08:09:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339021628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339021628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:58 compute-0 ceph-mon[75237]: pgmap v1545: 305 pgs: 305 active+clean; 88 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.8 KiB/s wr, 138 op/s
Nov 29 08:09:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2339021628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2339021628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 29 08:09:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 08:09:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 29 08:09:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 130 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 175 op/s
Nov 29 08:09:59 compute-0 nova_compute[255040]: 2025-11-29 08:09:59.743 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326379265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326379265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:00 compute-0 ceph-mon[75237]: osdmap e286: 3 total, 3 up, 3 in
Nov 29 08:10:00 compute-0 ceph-mon[75237]: pgmap v1547: 305 pgs: 305 active+clean; 130 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 175 op/s
Nov 29 08:10:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3326379265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3326379265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:00 compute-0 nova_compute[255040]: 2025-11-29 08:10:00.847 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 121 op/s
Nov 29 08:10:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2016129114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2016129114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:02 compute-0 ceph-mon[75237]: pgmap v1548: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 121 op/s
Nov 29 08:10:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2016129114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2016129114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:02 compute-0 podman[283976]: 2025-11-29 08:10:02.881005959 +0000 UTC m=+0.050072638 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:10:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 2.7 MiB/s wr, 147 op/s
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.789 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.790 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.807 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:10:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3180706608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.877 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.877 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.888 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.888 255071 INFO nova.compute.claims [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:10:03 compute-0 nova_compute[255040]: 2025-11-29 08:10:03.991 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257126866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.481 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.488 255071 DEBUG nova.compute.provider_tree [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.510 255071 DEBUG nova.scheduler.client.report [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.539 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.539 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.579 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.580 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.596 255071 INFO nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.610 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.645 255071 INFO nova.virt.block_device [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Booting with volume 4e507a0b-03e8-4934-a54e-56137eebde3a at /dev/vda
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.746 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.792 255071 DEBUG nova.policy [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:10:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 29 08:10:04 compute-0 ceph-mon[75237]: pgmap v1549: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 2.7 MiB/s wr, 147 op/s
Nov 29 08:10:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3180706608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3257126866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.828 255071 DEBUG os_brick.utils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:10:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.830 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.843 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.843 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[20ce47e4-de2e-4d29-80dc-5685791be092]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.845 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.856 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.857 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[de8e075c-c8ab-4cd1-81fe-e58183153d7c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.860 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.873 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.873 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[087ca371-2993-4f9e-9416-e0729960f6e9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.876 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[097c5969-cc33-4f8e-928f-5e483d2d5ab6]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.877 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.923 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.927 255071 DEBUG os_brick.initiator.connectors.lightos [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.928 255071 DEBUG os_brick.initiator.connectors.lightos [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.929 255071 DEBUG os_brick.initiator.connectors.lightos [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.929 255071 DEBUG os_brick.utils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:10:04 compute-0 nova_compute[255040]: 2025-11-29 08:10:04.930 255071 DEBUG nova.virt.block_device [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating existing volume attachment record: db836ae1-ebff-413a-9f04-3117d8fe4d2c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:10:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/284793949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 2.7 MiB/s wr, 159 op/s
Nov 29 08:10:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 29 08:10:05 compute-0 ceph-mon[75237]: osdmap e287: 3 total, 3 up, 3 in
Nov 29 08:10:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/284793949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.849 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 08:10:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.929 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Successfully created port: 3c03306d-f387-4844-a235-2eaba1efde2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.934 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.936 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.936 255071 INFO nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Creating image(s)
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.937 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.937 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Ensure instance console log exists: /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.937 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.938 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:05 compute-0 nova_compute[255040]: 2025-11-29 08:10:05.938 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.480 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Successfully updated port: 3c03306d-f387-4844-a235-2eaba1efde2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.494 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.495 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.495 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.626 255071 DEBUG nova.compute.manager [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-changed-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.627 255071 DEBUG nova.compute.manager [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Refreshing instance network info cache due to event network-changed-3c03306d-f387-4844-a235-2eaba1efde2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.627 255071 DEBUG oslo_concurrency.lockutils [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:06 compute-0 nova_compute[255040]: 2025-11-29 08:10:06.701 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:10:06 compute-0 ceph-mon[75237]: pgmap v1551: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 2.7 MiB/s wr, 159 op/s
Nov 29 08:10:06 compute-0 ceph-mon[75237]: osdmap e288: 3 total, 3 up, 3 in
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.549 255071 DEBUG nova.network.neutron [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating instance_info_cache with network_info: [{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.583 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.583 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Instance network_info: |[{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.584 255071 DEBUG oslo_concurrency.lockutils [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.584 255071 DEBUG nova.network.neutron [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Refreshing network info cache for port 3c03306d-f387-4844-a235-2eaba1efde2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.587 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Start _get_guest_xml network_info=[{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4e507a0b-03e8-4934-a54e-56137eebde3a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4e507a0b-03e8-4934-a54e-56137eebde3a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '40011a89-5ea1-4ffe-bda7-a3116abd2267', 'attached_at': '', 'detached_at': '', 'volume_id': '4e507a0b-03e8-4934-a54e-56137eebde3a', 'serial': '4e507a0b-03e8-4934-a54e-56137eebde3a'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': True, 'attachment_id': 'db836ae1-ebff-413a-9f04-3117d8fe4d2c', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.592 255071 WARNING nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.598 255071 DEBUG nova.virt.libvirt.host [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.599 255071 DEBUG nova.virt.libvirt.host [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.602 255071 DEBUG nova.virt.libvirt.host [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.602 255071 DEBUG nova.virt.libvirt.host [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.603 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.603 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.603 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.603 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.604 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.604 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.604 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.604 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.605 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.605 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.605 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.605 255071 DEBUG nova.virt.hardware [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.633 255071 DEBUG nova.storage.rbd_utils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:07 compute-0 nova_compute[255040]: 2025-11-29 08:10:07.638 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 29 08:10:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 08:10:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 29 08:10:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.3 KiB/s wr, 74 op/s
Nov 29 08:10:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510111608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.131 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.170 255071 DEBUG nova.virt.libvirt.vif [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1377407486',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEbvAc9/jx3S8T+wMMpuDgL3NZ7787JivLCfsQ6S8S7rjlgDHEMLkK8QzfAaHZjQKlWuCCrWH3SBgFrm+yzUG9F59LWD7hG53fMSRwk/Ued5TFWRCaTfMEt25DEVMC+Hw==',key_name='tempest-keypair-1595562543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-atc77nhd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=40011a89-5ea1-4ffe-bda7-a3116abd2267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.171 255071 DEBUG nova.network.os_vif_util [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.172 255071 DEBUG nova.network.os_vif_util [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.173 255071 DEBUG nova.objects.instance [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40011a89-5ea1-4ffe-bda7-a3116abd2267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.188 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <uuid>40011a89-5ea1-4ffe-bda7-a3116abd2267</uuid>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <name>instance-00000011</name>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1377407486</nova:name>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:10:07</nova:creationTime>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <nova:port uuid="3c03306d-f387-4844-a235-2eaba1efde2e">
Nov 29 08:10:08 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <system>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="serial">40011a89-5ea1-4ffe-bda7-a3116abd2267</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="uuid">40011a89-5ea1-4ffe-bda7-a3116abd2267</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </system>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <os>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </os>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <features>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </features>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config">
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </source>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-4e507a0b-03e8-4934-a54e-56137eebde3a">
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </source>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:10:08 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <serial>4e507a0b-03e8-4934-a54e-56137eebde3a</serial>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:98:0b:ca"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <target dev="tap3c03306d-f3"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/console.log" append="off"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <video>
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </video>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:10:08 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:10:08 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:10:08 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:10:08 compute-0 nova_compute[255040]: </domain>
Nov 29 08:10:08 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.190 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Preparing to wait for external event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.190 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.191 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.191 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.192 255071 DEBUG nova.virt.libvirt.vif [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1377407486',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEbvAc9/jx3S8T+wMMpuDgL3NZ7787JivLCfsQ6S8S7rjlgDHEMLkK8QzfAaHZjQKlWuCCrWH3SBgFrm+yzUG9F59LWD7hG53fMSRwk/Ued5TFWRCaTfMEt25DEVMC+Hw==',key_name='tempest-keypair-1595562543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-atc77nhd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=40011a89-5ea1-4ffe-bda7-a3116abd2267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.192 255071 DEBUG nova.network.os_vif_util [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.193 255071 DEBUG nova.network.os_vif_util [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.193 255071 DEBUG os_vif [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.194 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.195 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.195 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.200 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.201 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c03306d-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.201 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c03306d-f3, col_values=(('external_ids', {'iface-id': '3c03306d-f387-4844-a235-2eaba1efde2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:0b:ca', 'vm-uuid': '40011a89-5ea1-4ffe-bda7-a3116abd2267'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.203 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-0 NetworkManager[49116]: <info>  [1764403808.2045] manager: (tap3c03306d-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.205 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.209 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.211 255071 INFO os_vif [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3')
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.265 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.265 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.265 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:98:0b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.266 255071 INFO nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Using config drive
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.288 255071 DEBUG nova.storage.rbd_utils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.668 255071 INFO nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Creating config drive at /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.677 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpavftna6o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:08 compute-0 ceph-mon[75237]: osdmap e289: 3 total, 3 up, 3 in
Nov 29 08:10:08 compute-0 ceph-mon[75237]: pgmap v1554: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.3 KiB/s wr, 74 op/s
Nov 29 08:10:08 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1510111608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.816 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpavftna6o" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.844 255071 DEBUG nova.storage.rbd_utils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.849 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config 40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.969 255071 DEBUG nova.network.neutron [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updated VIF entry in instance network info cache for port 3c03306d-f387-4844-a235-2eaba1efde2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.970 255071 DEBUG nova.network.neutron [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating instance_info_cache with network_info: [{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:08 compute-0 nova_compute[255040]: 2025-11-29 08:10:08.987 255071 DEBUG oslo_concurrency.lockutils [req-d8a9278e-6cf3-4c90-b297-4c5590afe990 req-bf869e03-4d3f-415d-979f-26f7831b4dc8 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.017 255071 DEBUG oslo_concurrency.processutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config 40011a89-5ea1-4ffe-bda7-a3116abd2267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.018 255071 INFO nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Deleting local config drive /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267/disk.config because it was imported into RBD.
Nov 29 08:10:09 compute-0 kernel: tap3c03306d-f3: entered promiscuous mode
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.0709] manager: (tap3c03306d-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 08:10:09 compute-0 ovn_controller[153295]: 2025-11-29T08:10:09Z|00162|binding|INFO|Claiming lport 3c03306d-f387-4844-a235-2eaba1efde2e for this chassis.
Nov 29 08:10:09 compute-0 ovn_controller[153295]: 2025-11-29T08:10:09Z|00163|binding|INFO|3c03306d-f387-4844-a235-2eaba1efde2e: Claiming fa:16:3e:98:0b:ca 10.100.0.11
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.072 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.077 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.087 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:0b:ca 10.100.0.11'], port_security=['fa:16:3e:98:0b:ca 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40011a89-5ea1-4ffe-bda7-a3116abd2267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24a98d15-8f29-4bfa-8cd9-cf25fb940203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=3c03306d-f387-4844-a235-2eaba1efde2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.088 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 3c03306d-f387-4844-a235-2eaba1efde2e in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.089 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.104 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cabd63-85bc-4f02-95a7-a185945a41ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.105 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e23492e-b1 in ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:10:09 compute-0 systemd-udevd[284139]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.107 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e23492e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.107 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[17e7f3ce-689d-477e-980c-d9d42f9fadef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.108 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d87324-be9c-49fa-bd90-56be4a8e7a41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 systemd-machined[216271]: New machine qemu-17-instance-00000011.
Nov 29 08:10:09 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.1258] device (tap3c03306d-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.1267] device (tap3c03306d-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.126 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddb2d54-612e-4ac7-9a51-2c10d7faf9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.154 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.153 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6696314b-4abe-45c2-af49-a70f468d6036]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_controller[153295]: 2025-11-29T08:10:09Z|00164|binding|INFO|Setting lport 3c03306d-f387-4844-a235-2eaba1efde2e ovn-installed in OVS
Nov 29 08:10:09 compute-0 ovn_controller[153295]: 2025-11-29T08:10:09Z|00165|binding|INFO|Setting lport 3c03306d-f387-4844-a235-2eaba1efde2e up in Southbound
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.163 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.187 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[da46a728-0e0b-4631-8c3d-77f8e026162c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.194 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6bbcda-ba6e-4165-b214-f7f68d92f669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.1962] manager: (tap6e23492e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.230 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca7e966-783a-4e51-9a42-35b6c5a0809f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.233 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6c5ec4-7627-4a41-bdcd-9e3bf6c420e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.2556] device (tap6e23492e-b0): carrier: link connected
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.259 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[16b90670-c74d-433b-9ded-4d41c607bd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.287 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ac63601f-6fd9-4f31-99ef-5ae06e6a416f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604659, 'reachable_time': 30731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284171, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.309 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a58723f0-373c-4346-ae89-32e310fffd17]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604659, 'tstamp': 604659}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284172, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.334 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ec733349-61bb-44ae-a632-e430912b4712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604659, 'reachable_time': 30731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284173, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.343 255071 DEBUG nova.compute.manager [req-3792ec62-f47a-4dbd-becf-af461d4f4cdd req-7afa6729-53fe-4e4a-a428-d24cf25b31b1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.344 255071 DEBUG oslo_concurrency.lockutils [req-3792ec62-f47a-4dbd-becf-af461d4f4cdd req-7afa6729-53fe-4e4a-a428-d24cf25b31b1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.344 255071 DEBUG oslo_concurrency.lockutils [req-3792ec62-f47a-4dbd-becf-af461d4f4cdd req-7afa6729-53fe-4e4a-a428-d24cf25b31b1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.344 255071 DEBUG oslo_concurrency.lockutils [req-3792ec62-f47a-4dbd-becf-af461d4f4cdd req-7afa6729-53fe-4e4a-a428-d24cf25b31b1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.345 255071 DEBUG nova.compute.manager [req-3792ec62-f47a-4dbd-becf-af461d4f4cdd req-7afa6729-53fe-4e4a-a428-d24cf25b31b1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Processing event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.375 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1cdbe622-501f-4478-829d-bfd338a6ebc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.450 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c38e1603-21f0-4b75-a9aa-97eae83b7aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.453 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.453 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.453 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.456 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 kernel: tap6e23492e-b0: entered promiscuous mode
Nov 29 08:10:09 compute-0 NetworkManager[49116]: <info>  [1764403809.4572] manager: (tap6e23492e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.460 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.461 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ovn_controller[153295]: 2025-11-29T08:10:09Z|00166|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.476 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.478 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
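The Errno 2 above is the expected "no proxy running yet" case: the agent probes the pidfile before writing a fresh haproxy config. A sketch of that read-and-tolerate-missing pattern (illustrative, not neutron's exact helper):

    # Illustrative re-creation of the pidfile probe behind the debug line above.
    def get_value_from_file(path, converter=None):
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except (OSError, ValueError) as err:
            print(f"Unable to access {path}; Error: {err}")
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy',
        int)
    # pid is None here, so a new haproxy instance gets spawned below.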
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.479 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1b284541-15ba-49bb-8f38-61c275cfe637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.480 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:10:09 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:09.481 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'env', 'PROCESS_TAG=haproxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
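Per the generated config, haproxy runs inside the ovnmeta-6e23492e-... namespace, binds 169.254.169.254:80, proxies to the /var/lib/neutron/metadata_proxy unix socket, and adds the X-OVN-Network-ID header so the metadata service can resolve the caller's network. From a guest on that network the request is just the standard metadata fetch; a sketch (run inside the instance, not on the compute host):

    import urllib.request

    # 169.254.169.254:80 is the bind address from the haproxy config above.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())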
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.748 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 79 op/s
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.873 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403809.872516, 40011a89-5ea1-4ffe-bda7-a3116abd2267 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.873 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] VM Started (Lifecycle Event)
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.876 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.881 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.885 255071 INFO nova.virt.libvirt.driver [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Instance spawned successfully.
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.886 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:10:09 compute-0 podman[284246]: 2025-11-29 08:10:09.895075314 +0000 UTC m=+0.061897976 container create 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.895 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.910 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.915 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.916 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.916 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.917 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.917 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.918 255071 DEBUG nova.virt.libvirt.driver [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.943 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.944 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403809.8728611, 40011a89-5ea1-4ffe-bda7-a3116abd2267 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.944 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] VM Paused (Lifecycle Event)
Nov 29 08:10:09 compute-0 systemd[1]: Started libpod-conmon-8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92.scope.
Nov 29 08:10:09 compute-0 podman[284246]: 2025-11-29 08:10:09.860868054 +0000 UTC m=+0.027690726 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.970 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.976 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403809.8794477, 40011a89-5ea1-4ffe-bda7-a3116abd2267 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.977 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] VM Resumed (Lifecycle Event)
Nov 29 08:10:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.982 255071 INFO nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Took 4.05 seconds to spawn the instance on the hypervisor.
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.982 255071 DEBUG nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e243a254c2ee5c0c953424e3ee356b6dd4e5fbf7395b4e95a734c64d2a51de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:09 compute-0 nova_compute[255040]: 2025-11-29 08:10:09.999 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:10 compute-0 podman[284246]: 2025-11-29 08:10:10.002289227 +0000 UTC m=+0.169111899 container init 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:10 compute-0 nova_compute[255040]: 2025-11-29 08:10:10.003 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:10 compute-0 podman[284246]: 2025-11-29 08:10:10.008726961 +0000 UTC m=+0.175549613 container start 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:10 compute-0 nova_compute[255040]: 2025-11-29 08:10:10.025 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:10 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [NOTICE]   (284266) : New worker (284268) forked
Nov 29 08:10:10 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [NOTICE]   (284266) : Loading success.
Nov 29 08:10:10 compute-0 nova_compute[255040]: 2025-11-29 08:10:10.048 255071 INFO nova.compute.manager [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Took 6.20 seconds to build instance.
Nov 29 08:10:10 compute-0 nova_compute[255040]: 2025-11-29 08:10:10.068 255071 DEBUG oslo_concurrency.lockutils [None req-e886a1f4-56f0-4b62-bc1b-eee7e428c6b5 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/830064787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271944283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271944283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:10 compute-0 ceph-mon[75237]: pgmap v1555: 305 pgs: 305 active+clean; 134 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 79 op/s
Nov 29 08:10:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/830064787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2271944283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2271944283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
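The client.openstack "df" and "osd pool get-quota" dispatches above are the monitor-side view of a librados caller polling pool capacity (the pattern cinder's RBD driver uses for its periodic stats). A sketch of the same two mon commands via the Python rados bindings; the conffile path and keyring are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        # Equivalent of the {"prefix":"df","format":"json"} dispatch above.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        df = json.loads(out)
        # Equivalent of the "osd pool get-quota" dispatch for the volumes pool.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd pool get-quota',
                        'pool': 'volumes', 'format': 'json'}), b'')
        quota = json.loads(out)
        print(df['stats']['total_avail_bytes'], quota.get('quota_max_bytes'))
    finally:
        cluster.shutdown()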
Nov 29 08:10:10 compute-0 podman[284277]: 2025-11-29 08:10:10.89778091 +0000 UTC m=+0.067554037 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:10:10 compute-0 NetworkManager[49116]: <info>  [1764403810.9638] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 08:10:10 compute-0 NetworkManager[49116]: <info>  [1764403810.9646] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 08:10:10 compute-0 nova_compute[255040]: 2025-11-29 08:10:10.963 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.069 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:11 compute-0 ovn_controller[153295]: 2025-11-29T08:10:11Z|00167|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.079 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.286 255071 DEBUG nova.compute.manager [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-changed-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.286 255071 DEBUG nova.compute.manager [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Refreshing instance network info cache due to event network-changed-3c03306d-f387-4844-a235-2eaba1efde2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.286 255071 DEBUG oslo_concurrency.lockutils [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.286 255071 DEBUG oslo_concurrency.lockutils [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.286 255071 DEBUG nova.network.neutron [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Refreshing network info cache for port 3c03306d-f387-4844-a235-2eaba1efde2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.454 255071 DEBUG nova.compute.manager [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.454 255071 DEBUG oslo_concurrency.lockutils [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.455 255071 DEBUG oslo_concurrency.lockutils [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.455 255071 DEBUG oslo_concurrency.lockutils [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.456 255071 DEBUG nova.compute.manager [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] No waiting events found dispatching network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:11 compute-0 nova_compute[255040]: 2025-11-29 08:10:11.456 255071 WARNING nova.compute.manager [req-fe3733ff-7b7c-4aef-88e5-227621409b7d req-3ab4d640-fed1-43bf-86fd-b064e7910153 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received unexpected event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e for instance with vm_state active and task_state None.
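The paired acquire/release lines on "40011a89-...-events" show nova serializing external event delivery with an oslo.concurrency lock around an inner _pop_event; when no waiter is registered and the instance is already active, the event is logged as unexpected and dropped, which is benign after a successful spawn. A minimal sketch of that lock pattern (names illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: payload}

    def pop_instance_event(instance_uuid, event_name):
        # Same shape as the logged lock: "<uuid>-events" guarding _pop_event.
        @lockutils.synchronized(f'{instance_uuid}-events')
        def _pop_event():
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()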
Nov 29 08:10:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 28 KiB/s wr, 71 op/s
Nov 29 08:10:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 29 08:10:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 08:10:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 29 08:10:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1686440402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1686440402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:12 compute-0 nova_compute[255040]: 2025-11-29 08:10:12.494 255071 DEBUG nova.network.neutron [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updated VIF entry in instance network info cache for port 3c03306d-f387-4844-a235-2eaba1efde2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:12 compute-0 nova_compute[255040]: 2025-11-29 08:10:12.496 255071 DEBUG nova.network.neutron [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating instance_info_cache with network_info: [{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
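The cache payload logged above is plain JSON, so it can be inspected directly; a sketch that pulls the fixed and floating addresses out of it, assuming raw holds the logged list verbatim:

    import json

    vifs = json.loads(raw)  # raw: the network_info JSON from the log line above
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # -> 3c03306d-f387-4844-a235-2eaba1efde2e 10.100.0.11 ['192.168.122.224']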
Nov 29 08:10:12 compute-0 nova_compute[255040]: 2025-11-29 08:10:12.522 255071 DEBUG oslo_concurrency.lockutils [req-40bd7754-0a4f-4c68-a9ca-6843ec200ed7 req-3f79a1f9-a38f-414b-8ae8-d41c490e93c5 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 29 08:10:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 08:10:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 29 08:10:12 compute-0 ceph-mon[75237]: pgmap v1556: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 28 KiB/s wr, 71 op/s
Nov 29 08:10:12 compute-0 ceph-mon[75237]: osdmap e290: 3 total, 3 up, 3 in
Nov 29 08:10:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1686440402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:12 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1686440402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:12 compute-0 ceph-mon[75237]: osdmap e291: 3 total, 3 up, 3 in
Nov 29 08:10:13 compute-0 nova_compute[255040]: 2025-11-29 08:10:13.204 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3482446403' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3482446403' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 194 op/s
Nov 29 08:10:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 29 08:10:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 08:10:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 29 08:10:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3482446403' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3482446403' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:14 compute-0 nova_compute[255040]: 2025-11-29 08:10:14.753 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:14 compute-0 ceph-mon[75237]: pgmap v1559: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 194 op/s
Nov 29 08:10:14 compute-0 ceph-mon[75237]: osdmap e292: 3 total, 3 up, 3 in
Nov 29 08:10:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3790612549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3790612549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 298 op/s
Nov 29 08:10:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3790612549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3790612549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801725041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3801725041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426572582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: pgmap v1561: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 298 op/s
Nov 29 08:10:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3801725041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3801725041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3426572582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.7 KiB/s wr, 282 op/s
Nov 29 08:10:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 29 08:10:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 08:10:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 29 08:10:18 compute-0 nova_compute[255040]: 2025-11-29 08:10:18.250 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 29 08:10:18 compute-0 ceph-mon[75237]: pgmap v1562: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.7 KiB/s wr, 282 op/s
Nov 29 08:10:18 compute-0 ceph-mon[75237]: osdmap e293: 3 total, 3 up, 3 in
Nov 29 08:10:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 08:10:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 29 08:10:19 compute-0 sudo[284300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:19 compute-0 sudo[284300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:19 compute-0 sudo[284300]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:19 compute-0 sudo[284325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:19 compute-0 sudo[284325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:19 compute-0 sudo[284325]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:19 compute-0 sudo[284350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:19 compute-0 sudo[284350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:19 compute-0 sudo[284350]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 KiB/s wr, 254 op/s
Nov 29 08:10:19 compute-0 nova_compute[255040]: 2025-11-29 08:10:19.753 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:19 compute-0 sudo[284375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:10:19 compute-0 sudo[284375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:19 compute-0 ceph-mon[75237]: osdmap e294: 3 total, 3 up, 3 in
Nov 29 08:10:20 compute-0 sudo[284375]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e5c9c89e-3baa-4591-bbec-472adaedad04 does not exist
Nov 29 08:10:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6901f747-bfac-4c51-98e4-0fa0212594ff does not exist
Nov 29 08:10:20 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c7a459d4-44b3-4532-9ee3-f1f5dda25adf does not exist
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3058986771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:20 compute-0 sudo[284431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:20 compute-0 sudo[284431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:20 compute-0 sudo[284431]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:20 compute-0 sudo[284456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:20 compute-0 sudo[284456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:20 compute-0 sudo[284456]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:20 compute-0 sudo[284481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:20 compute-0 sudo[284481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:20 compute-0 sudo[284481]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:20 compute-0 sudo[284506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:10:20 compute-0 sudo[284506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:20 compute-0 ceph-mon[75237]: pgmap v1565: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 KiB/s wr, 254 op/s
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3058986771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:21 compute-0 podman[284570]: 2025-11-29 08:10:21.107141491 +0000 UTC m=+0.054370433 container create 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:21 compute-0 podman[284570]: 2025-11-29 08:10:21.085358386 +0000 UTC m=+0.032587358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:21 compute-0 systemd[1]: Started libpod-conmon-9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff.scope.
Nov 29 08:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:21 compute-0 podman[284570]: 2025-11-29 08:10:21.239932393 +0000 UTC m=+0.187161345 container init 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:10:21 compute-0 podman[284570]: 2025-11-29 08:10:21.249787888 +0000 UTC m=+0.197016830 container start 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:21 compute-0 podman[284570]: 2025-11-29 08:10:21.254268688 +0000 UTC m=+0.201497630 container attach 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:21 compute-0 eager_mcnulty[284584]: 167 167
Nov 29 08:10:21 compute-0 systemd[1]: libpod-9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff.scope: Deactivated successfully.
Nov 29 08:10:21 compute-0 conmon[284584]: conmon 9e88a119d0bf7e5c5143 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff.scope/container/memory.events
Nov 29 08:10:21 compute-0 podman[284590]: 2025-11-29 08:10:21.312780132 +0000 UTC m=+0.035381812 container died 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 08:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a5dbe33092423df50b15c8168db113f4dffc7cf5c094a25e0b2c7c7e29a4e8-merged.mount: Deactivated successfully.
Nov 29 08:10:21 compute-0 podman[284590]: 2025-11-29 08:10:21.359816977 +0000 UTC m=+0.082418657 container remove 9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mcnulty, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:21 compute-0 systemd[1]: libpod-conmon-9e88a119d0bf7e5c5143bb7bab7b8b64478b70c0c9f22b8cc32070aba2807cff.scope: Deactivated successfully.
Nov 29 08:10:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 29 08:10:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 08:10:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 29 08:10:21 compute-0 podman[284611]: 2025-11-29 08:10:21.565772846 +0000 UTC m=+0.061550396 container create 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:21 compute-0 systemd[1]: Started libpod-conmon-20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c.scope.
Nov 29 08:10:21 compute-0 podman[284611]: 2025-11-29 08:10:21.540342092 +0000 UTC m=+0.036119662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:21 compute-0 podman[284611]: 2025-11-29 08:10:21.692957987 +0000 UTC m=+0.188735567 container init 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:21 compute-0 podman[284611]: 2025-11-29 08:10:21.701112335 +0000 UTC m=+0.196889895 container start 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:10:21 compute-0 podman[284611]: 2025-11-29 08:10:21.706647114 +0000 UTC m=+0.202424694 container attach 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.8 KiB/s wr, 89 op/s
Nov 29 08:10:22 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:10:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 29 08:10:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 08:10:22 compute-0 ceph-mon[75237]: osdmap e295: 3 total, 3 up, 3 in
Nov 29 08:10:22 compute-0 ceph-mon[75237]: pgmap v1567: 305 pgs: 305 active+clean; 134 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.8 KiB/s wr, 89 op/s
Nov 29 08:10:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 29 08:10:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 29 08:10:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 08:10:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 29 08:10:22 compute-0 strange_wilson[284627]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:10:22 compute-0 strange_wilson[284627]: --> relative data size: 1.0
Nov 29 08:10:22 compute-0 strange_wilson[284627]: --> All data devices are unavailable
Nov 29 08:10:22 compute-0 systemd[1]: libpod-20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c.scope: Deactivated successfully.
Nov 29 08:10:22 compute-0 podman[284611]: 2025-11-29 08:10:22.939236524 +0000 UTC m=+1.435014074 container died 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:22 compute-0 systemd[1]: libpod-20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c.scope: Consumed 1.157s CPU time.
Nov 29 08:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec3ebd42131814f18d2dff8c9a82f0838a9f761e9211a9c043ffbc01fb7a02d1-merged.mount: Deactivated successfully.
Nov 29 08:10:23 compute-0 podman[284611]: 2025-11-29 08:10:23.024313991 +0000 UTC m=+1.520091541 container remove 20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:10:23 compute-0 systemd[1]: libpod-conmon-20ee30dfa522e1bf756eb4d8f7fe6a526be501d4293b2953534ecfef5884031c.scope: Deactivated successfully.
Nov 29 08:10:23 compute-0 sudo[284506]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:23 compute-0 sudo[284667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:23 compute-0 sudo[284667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:23 compute-0 sudo[284667]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:23 compute-0 ovn_controller[153295]: 2025-11-29T08:10:23Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:0b:ca 10.100.0.11
Nov 29 08:10:23 compute-0 ovn_controller[153295]: 2025-11-29T08:10:23Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:0b:ca 10.100.0.11
Nov 29 08:10:23 compute-0 sudo[284692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:23 compute-0 sudo[284692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:23 compute-0 sudo[284692]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:23 compute-0 nova_compute[255040]: 2025-11-29 08:10:23.285 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:23 compute-0 sudo[284717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:23 compute-0 sudo[284717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:23 compute-0 sudo[284717]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:23 compute-0 sudo[284742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:10:23 compute-0 sudo[284742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:23 compute-0 ceph-mon[75237]: osdmap e296: 3 total, 3 up, 3 in
Nov 29 08:10:23 compute-0 ceph-mon[75237]: osdmap e297: 3 total, 3 up, 3 in
Nov 29 08:10:23 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:10:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 143 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 1.1 MiB/s wr, 74 op/s
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.857001946 +0000 UTC m=+0.058369211 container create a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:23 compute-0 systemd[1]: Started libpod-conmon-a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996.scope.
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.826723691 +0000 UTC m=+0.028090976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.955521885 +0000 UTC m=+0.156889180 container init a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.963251084 +0000 UTC m=+0.164618349 container start a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.96797705 +0000 UTC m=+0.169344425 container attach a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:23 compute-0 vigorous_mendeleev[284824]: 167 167
Nov 29 08:10:23 compute-0 systemd[1]: libpod-a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996.scope: Deactivated successfully.
Nov 29 08:10:23 compute-0 podman[284807]: 2025-11-29 08:10:23.972972975 +0000 UTC m=+0.174340240 container died a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c38a26618b8bb78f1cb422357cfc3894b093b09c7fb40156474ea526c8b55a2-merged.mount: Deactivated successfully.
Nov 29 08:10:24 compute-0 podman[284807]: 2025-11-29 08:10:24.01855507 +0000 UTC m=+0.219922335 container remove a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:24 compute-0 systemd[1]: libpod-conmon-a0d6316e0b83b2df1f4d60ff543527a8f24ecc129df4e895cda3b5f57889a996.scope: Deactivated successfully.
Nov 29 08:10:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050267714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:24 compute-0 podman[284847]: 2025-11-29 08:10:24.202999201 +0000 UTC m=+0.050693954 container create 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:24 compute-0 systemd[1]: Started libpod-conmon-28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8.scope.
Nov 29 08:10:24 compute-0 podman[284847]: 2025-11-29 08:10:24.179116379 +0000 UTC m=+0.026811162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f036bb6efe6b55427d9fcb58e176fb0c5a8da297f1c6d1ce0eccc217b95df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f036bb6efe6b55427d9fcb58e176fb0c5a8da297f1c6d1ce0eccc217b95df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f036bb6efe6b55427d9fcb58e176fb0c5a8da297f1c6d1ce0eccc217b95df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f036bb6efe6b55427d9fcb58e176fb0c5a8da297f1c6d1ce0eccc217b95df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:24 compute-0 podman[284847]: 2025-11-29 08:10:24.301247323 +0000 UTC m=+0.148942096 container init 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:10:24 compute-0 podman[284847]: 2025-11-29 08:10:24.309742922 +0000 UTC m=+0.157437675 container start 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:10:24 compute-0 podman[284847]: 2025-11-29 08:10:24.314545851 +0000 UTC m=+0.162240624 container attach 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 08:10:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 29 08:10:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 08:10:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 29 08:10:24 compute-0 ceph-mon[75237]: pgmap v1570: 305 pgs: 305 active+clean; 143 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 1.1 MiB/s wr, 74 op/s
Nov 29 08:10:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2050267714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2930010491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2930010491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
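The two audit entries above (and the identical pairs that recur below) record JSON mon commands sent by client.openstack from 192.168.122.10, i.e. an OpenStack service polling cluster capacity ("df") and the quota on the "volumes" pool. The sketch below only reproduces those same queries via the standard `ceph` CLI from Python; it assumes a reachable cluster and a client keyring on the host, neither of which is shown in this log, and the key names printed follow the usual `--format json` output rather than anything captured here.

    import json
    import subprocess

    def ceph_json(*args):
        # Shell out to the `ceph` CLI and decode its JSON output.
        out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
        return json.loads(out)

    # Same two queries seen in the audit log: {"prefix":"df"} and
    # {"prefix":"osd pool get-quota", "pool": "volumes"}.
    cluster_df = ceph_json("df")
    volumes_quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(json.dumps(cluster_df["stats"], indent=2))
    print(json.dumps(volumes_quota, indent=2))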
Nov 29 08:10:24 compute-0 nova_compute[255040]: 2025-11-29 08:10:24.756 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:25 compute-0 strange_wilson[284864]: {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     "0": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "devices": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "/dev/loop3"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             ],
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_name": "ceph_lv0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_size": "21470642176",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "name": "ceph_lv0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "tags": {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.crush_device_class": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.encrypted": "0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_id": "0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.vdo": "0"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             },
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "vg_name": "ceph_vg0"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         }
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     ],
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     "1": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "devices": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "/dev/loop4"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             ],
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_name": "ceph_lv1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_size": "21470642176",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "name": "ceph_lv1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "tags": {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.crush_device_class": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.encrypted": "0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_id": "1",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.vdo": "0"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             },
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "vg_name": "ceph_vg1"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         }
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     ],
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     "2": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "devices": [
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "/dev/loop5"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             ],
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_name": "ceph_lv2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_size": "21470642176",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "name": "ceph_lv2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "tags": {
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.crush_device_class": "",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.encrypted": "0",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osd_id": "2",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:                 "ceph.vdo": "0"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             },
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "type": "block",
Nov 29 08:10:25 compute-0 strange_wilson[284864]:             "vg_name": "ceph_vg2"
Nov 29 08:10:25 compute-0 strange_wilson[284864]:         }
Nov 29 08:10:25 compute-0 strange_wilson[284864]:     ]
Nov 29 08:10:25 compute-0 strange_wilson[284864]: }
Nov 29 08:10:25 compute-0 systemd[1]: libpod-28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8.scope: Deactivated successfully.
Nov 29 08:10:25 compute-0 podman[284874]: 2025-11-29 08:10:25.22401305 +0000 UTC m=+0.032972047 container died 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f036bb6efe6b55427d9fcb58e176fb0c5a8da297f1c6d1ce0eccc217b95df3-merged.mount: Deactivated successfully.
Nov 29 08:10:25 compute-0 podman[284874]: 2025-11-29 08:10:25.285526284 +0000 UTC m=+0.094485251 container remove 28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:10:25 compute-0 systemd[1]: libpod-conmon-28249ff14ac8326cd76e99e89456737ee048f77a2a582c409ebc9c119e396ba8.scope: Deactivated successfully.
Nov 29 08:10:25 compute-0 sudo[284742]: pam_unix(sudo:session): session closed for user root
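The strange_wilson container above is the `cephadm ... ceph-volume -- lvm list --format json` call dispatched by sudo[284742]; its stdout is the JSON object printed between 08:10:25 entries, keyed by OSD ID with one LV record per OSD. A minimal sketch of how that output could be post-processed once saved to a file; the filename and helper name are illustrative, not part of the log.

    import json

    def summarize_lvm_list(path):
        # Top-level keys are OSD IDs; each maps to a list of LV records.
        with open(path) as f:
            osds = json.load(f)
        for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print("osd.%s  lv=%s  devices=%s  osd_fsid=%s" % (
                    osd_id,
                    lv.get("lv_path"),
                    ",".join(lv.get("devices", [])),
                    tags.get("ceph.osd_fsid"),
                ))

    if __name__ == "__main__":
        summarize_lvm_list("lvm_list.json")  # e.g. the container stdout saved locally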
Nov 29 08:10:25 compute-0 podman[284873]: 2025-11-29 08:10:25.334687426 +0000 UTC m=+0.129211825 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:25 compute-0 sudo[284914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:25 compute-0 sudo[284914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:25 compute-0 sudo[284914]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 29 08:10:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 08:10:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 29 08:10:25 compute-0 ceph-mon[75237]: osdmap e298: 3 total, 3 up, 3 in
Nov 29 08:10:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2930010491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2930010491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:25 compute-0 sudo[284939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:25 compute-0 sudo[284939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:25 compute-0 sudo[284939]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:25 compute-0 sudo[284964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:25 compute-0 sudo[284964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:25 compute-0 sudo[284964]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:25 compute-0 sudo[284989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:10:25 compute-0 sudo[284989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449651535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449651535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 165 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 310 op/s
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.05055977 +0000 UTC m=+0.044073887 container create 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:10:26 compute-0 systemd[1]: Started libpod-conmon-664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840.scope.
Nov 29 08:10:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.032603937 +0000 UTC m=+0.026118074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.143972811 +0000 UTC m=+0.137486958 container init 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.153635331 +0000 UTC m=+0.147149458 container start 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.157709891 +0000 UTC m=+0.151224028 container attach 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:10:26 compute-0 funny_swirles[285071]: 167 167
Nov 29 08:10:26 compute-0 systemd[1]: libpod-664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840.scope: Deactivated successfully.
Nov 29 08:10:26 compute-0 conmon[285071]: conmon 664d3c2ef89d48859da1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840.scope/container/memory.events
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.162242334 +0000 UTC m=+0.155756441 container died 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-da509ebcace622c6c8fa9f0b724e3c6d870d8e6ffab1af7f9a1566985f9eee5c-merged.mount: Deactivated successfully.
Nov 29 08:10:26 compute-0 podman[285053]: 2025-11-29 08:10:26.208585 +0000 UTC m=+0.202099117 container remove 664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:10:26 compute-0 systemd[1]: libpod-conmon-664d3c2ef89d48859da1a21f3280fbe646089a727c5e517906ef944518e5f840.scope: Deactivated successfully.
Nov 29 08:10:26 compute-0 podman[285094]: 2025-11-29 08:10:26.385736274 +0000 UTC m=+0.047347025 container create d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:26 compute-0 systemd[1]: Started libpod-conmon-d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90.scope.
Nov 29 08:10:26 compute-0 podman[285094]: 2025-11-29 08:10:26.366648611 +0000 UTC m=+0.028259382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:26 compute-0 ceph-mon[75237]: osdmap e299: 3 total, 3 up, 3 in
Nov 29 08:10:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3449651535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3449651535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:26 compute-0 ceph-mon[75237]: pgmap v1573: 305 pgs: 305 active+clean; 165 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 310 op/s
Nov 29 08:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dc457fa063625585df02592de54945247171457c19d48aefb3ff98c7eda616/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dc457fa063625585df02592de54945247171457c19d48aefb3ff98c7eda616/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dc457fa063625585df02592de54945247171457c19d48aefb3ff98c7eda616/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dc457fa063625585df02592de54945247171457c19d48aefb3ff98c7eda616/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:26 compute-0 podman[285094]: 2025-11-29 08:10:26.485288431 +0000 UTC m=+0.146899212 container init d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:10:26 compute-0 podman[285094]: 2025-11-29 08:10:26.493917823 +0000 UTC m=+0.155528574 container start d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:26 compute-0 podman[285094]: 2025-11-29 08:10:26.49825449 +0000 UTC m=+0.159865271 container attach d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:10:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231471168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231471168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:27.132 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:27.134 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:27.135 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:27 compute-0 nova_compute[255040]: 2025-11-29 08:10:27.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 165 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 790 KiB/s rd, 4.8 MiB/s wr, 233 op/s
Nov 29 08:10:28 compute-0 nova_compute[255040]: 2025-11-29 08:10:28.292 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2231471168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2231471168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-0 condescending_haibt[285111]: {
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_id": 2,
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "type": "bluestore"
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     },
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_id": 0,
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "type": "bluestore"
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     },
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_id": 1,
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:         "type": "bluestore"
Nov 29 08:10:28 compute-0 condescending_haibt[285111]:     }
Nov 29 08:10:28 compute-0 condescending_haibt[285111]: }
Nov 29 08:10:28 compute-0 systemd[1]: libpod-d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90.scope: Deactivated successfully.
Nov 29 08:10:28 compute-0 systemd[1]: libpod-d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90.scope: Consumed 1.926s CPU time.
Nov 29 08:10:28 compute-0 podman[285094]: 2025-11-29 08:10:28.426253612 +0000 UTC m=+2.087864443 container died d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-63dc457fa063625585df02592de54945247171457c19d48aefb3ff98c7eda616-merged.mount: Deactivated successfully.
Nov 29 08:10:28 compute-0 podman[285094]: 2025-11-29 08:10:28.49833069 +0000 UTC m=+2.159941451 container remove d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haibt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:28 compute-0 systemd[1]: libpod-conmon-d9b5bb3a790ed8842c006ba09996e6d049b82aa6131a0640fd78de4d7c14db90.scope: Deactivated successfully.
Nov 29 08:10:28 compute-0 sudo[284989]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:10:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:10:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 849c50cb-f0a1-4907-98cf-ffb9bc5570a6 does not exist
Nov 29 08:10:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 507cc549-689c-4235-9e52-2f3e0fa93a9b does not exist
Nov 29 08:10:28 compute-0 sudo[285158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:28 compute-0 sudo[285158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:28 compute-0 sudo[285158]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/756117795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:28 compute-0 sudo[285183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:10:28 compute-0 sudo[285183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:28 compute-0 sudo[285183]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788188792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788188792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:28 compute-0 nova_compute[255040]: 2025-11-29 08:10:28.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:29 compute-0 ceph-mon[75237]: pgmap v1574: 305 pgs: 305 active+clean; 165 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 790 KiB/s rd, 4.8 MiB/s wr, 233 op/s
Nov 29 08:10:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:10:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/756117795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3788188792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3788188792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 29 08:10:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701098729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 08:10:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 29 08:10:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701098729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 721 KiB/s rd, 3.4 MiB/s wr, 287 op/s
Nov 29 08:10:29 compute-0 nova_compute[255040]: 2025-11-29 08:10:29.759 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/701098729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:30 compute-0 ceph-mon[75237]: osdmap e300: 3 total, 3 up, 3 in
Nov 29 08:10:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/701098729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:30 compute-0 ceph-mon[75237]: pgmap v1576: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 721 KiB/s rd, 3.4 MiB/s wr, 287 op/s
Nov 29 08:10:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 29 08:10:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 08:10:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 29 08:10:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427350901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427350901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:30 compute-0 nova_compute[255040]: 2025-11-29 08:10:30.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:30 compute-0 nova_compute[255040]: 2025-11-29 08:10:30.978 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:10:31 compute-0 nova_compute[255040]: 2025-11-29 08:10:31.001 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:10:31 compute-0 nova_compute[255040]: 2025-11-29 08:10:31.001 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:31 compute-0 ceph-mon[75237]: osdmap e301: 3 total, 3 up, 3 in
Nov 29 08:10:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/427350901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/427350901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.2 MiB/s wr, 198 op/s
Nov 29 08:10:31 compute-0 nova_compute[255040]: 2025-11-29 08:10:31.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.007 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.008 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.009 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228186015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.509 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 29 08:10:32 compute-0 ceph-mon[75237]: pgmap v1578: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.2 MiB/s wr, 198 op/s
Nov 29 08:10:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3228186015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 08:10:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.595 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.596 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.787 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.790 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4225MB free_disk=59.98810958862305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.790 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.790 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.863 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 40011a89-5ea1-4ffe-bda7-a3116abd2267 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.864 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.864 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:10:32 compute-0 nova_compute[255040]: 2025-11-29 08:10:32.912 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.295 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3944765300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.364 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.371 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.388 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.408 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:10:33 compute-0 nova_compute[255040]: 2025-11-29 08:10:33.409 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:33 compute-0 ceph-mon[75237]: osdmap e302: 3 total, 3 up, 3 in
Nov 29 08:10:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3944765300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 36 KiB/s wr, 185 op/s
Nov 29 08:10:33 compute-0 podman[285253]: 2025-11-29 08:10:33.913092303 +0000 UTC m=+0.072793928 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:10:34 compute-0 nova_compute[255040]: 2025-11-29 08:10:34.409 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:34 compute-0 nova_compute[255040]: 2025-11-29 08:10:34.410 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:34 compute-0 nova_compute[255040]: 2025-11-29 08:10:34.410 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:10:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 29 08:10:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 08:10:34 compute-0 ceph-mon[75237]: pgmap v1580: 305 pgs: 305 active+clean; 167 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 36 KiB/s wr, 185 op/s
Nov 29 08:10:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 29 08:10:34 compute-0 nova_compute[255040]: 2025-11-29 08:10:34.762 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:34 compute-0 nova_compute[255040]: 2025-11-29 08:10:34.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 29 08:10:35 compute-0 ceph-mon[75237]: osdmap e303: 3 total, 3 up, 3 in
Nov 29 08:10:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 08:10:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 29 08:10:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 167 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.6 KiB/s wr, 135 op/s
Nov 29 08:10:36 compute-0 ceph-mon[75237]: osdmap e304: 3 total, 3 up, 3 in
Nov 29 08:10:36 compute-0 ceph-mon[75237]: pgmap v1583: 305 pgs: 305 active+clean; 167 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.6 KiB/s wr, 135 op/s
Nov 29 08:10:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 29 08:10:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 08:10:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 29 08:10:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 167 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.6 KiB/s wr, 136 op/s
Nov 29 08:10:37 compute-0 nova_compute[255040]: 2025-11-29 08:10:37.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 29 08:10:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 08:10:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 29 08:10:38 compute-0 nova_compute[255040]: 2025-11-29 08:10:38.348 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-0 ceph-mon[75237]: osdmap e305: 3 total, 3 up, 3 in
Nov 29 08:10:38 compute-0 ceph-mon[75237]: pgmap v1585: 305 pgs: 305 active+clean; 167 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.6 KiB/s wr, 136 op/s
Nov 29 08:10:38 compute-0 ceph-mon[75237]: osdmap e306: 3 total, 3 up, 3 in
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:10:38
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 29 08:10:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:10:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 29 08:10:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 08:10:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 29 08:10:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 167 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 7.0 KiB/s wr, 252 op/s
Nov 29 08:10:39 compute-0 nova_compute[255040]: 2025-11-29 08:10:39.763 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 29 08:10:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 08:10:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 29 08:10:40 compute-0 ceph-mon[75237]: osdmap e307: 3 total, 3 up, 3 in
Nov 29 08:10:40 compute-0 ceph-mon[75237]: pgmap v1588: 305 pgs: 305 active+clean; 167 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 7.0 KiB/s wr, 252 op/s
Nov 29 08:10:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3695309737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3695309737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:40 compute-0 ovn_controller[153295]: 2025-11-29T08:10:40Z|00168|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 08:10:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 29 08:10:41 compute-0 ceph-mon[75237]: osdmap e308: 3 total, 3 up, 3 in
Nov 29 08:10:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3695309737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3695309737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 08:10:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 29 08:10:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 7.5 KiB/s wr, 344 op/s
Nov 29 08:10:41 compute-0 podman[285272]: 2025-11-29 08:10:41.938347532 +0000 UTC m=+0.095426507 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1353208476' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1353208476' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 29 08:10:42 compute-0 ceph-mon[75237]: osdmap e309: 3 total, 3 up, 3 in
Nov 29 08:10:42 compute-0 ceph-mon[75237]: pgmap v1591: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 7.5 KiB/s wr, 344 op/s
Nov 29 08:10:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1353208476' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1353208476' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 08:10:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 29 08:10:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3847296407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3847296407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 29 08:10:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 08:10:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 29 08:10:43 compute-0 nova_compute[255040]: 2025-11-29 08:10:43.352 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:10:43 compute-0 nova_compute[255040]: 2025-11-29 08:10:43.552 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:43.551 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:43.553 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:10:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:43.554 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:43 compute-0 ceph-mon[75237]: osdmap e310: 3 total, 3 up, 3 in
Nov 29 08:10:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3847296407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3847296407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:43 compute-0 ceph-mon[75237]: osdmap e311: 3 total, 3 up, 3 in
Nov 29 08:10:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 6.7 KiB/s wr, 185 op/s
Nov 29 08:10:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 29 08:10:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 08:10:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 29 08:10:44 compute-0 ceph-mon[75237]: pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 6.7 KiB/s wr, 185 op/s
Nov 29 08:10:44 compute-0 nova_compute[255040]: 2025-11-29 08:10:44.767 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3989190611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3989190611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 29 08:10:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 08:10:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 29 08:10:45 compute-0 ceph-mon[75237]: osdmap e312: 3 total, 3 up, 3 in
Nov 29 08:10:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3989190611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3989190611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 11 KiB/s wr, 236 op/s
Nov 29 08:10:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 29 08:10:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 08:10:46 compute-0 ceph-mon[75237]: osdmap e313: 3 total, 3 up, 3 in
Nov 29 08:10:46 compute-0 ceph-mon[75237]: pgmap v1597: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 11 KiB/s wr, 236 op/s
Nov 29 08:10:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 29 08:10:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1694878505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1694878505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:47 compute-0 ceph-mon[75237]: osdmap e314: 3 total, 3 up, 3 in
Nov 29 08:10:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1694878505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1694878505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 4.5 KiB/s wr, 121 op/s
Nov 29 08:10:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 29 08:10:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 08:10:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 29 08:10:48 compute-0 nova_compute[255040]: 2025-11-29 08:10:48.354 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-0 ceph-mon[75237]: pgmap v1599: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 4.5 KiB/s wr, 121 op/s
Nov 29 08:10:48 compute-0 ceph-mon[75237]: osdmap e315: 3 total, 3 up, 3 in
Nov 29 08:10:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598199491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 10 KiB/s wr, 143 op/s
Nov 29 08:10:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2598199491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:49 compute-0 nova_compute[255040]: 2025-11-29 08:10:49.771 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 29 08:10:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 08:10:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 29 08:10:50 compute-0 ceph-mon[75237]: pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 10 KiB/s wr, 143 op/s
Nov 29 08:10:50 compute-0 ceph-mon[75237]: osdmap e316: 3 total, 3 up, 3 in
Nov 29 08:10:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Nov 29 08:10:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 08:10:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.442 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.443 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.459 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.530 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.531 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.541 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.542 255071 INFO nova.compute.claims [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:10:51 compute-0 nova_compute[255040]: 2025-11-29 08:10:51.660 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 10 KiB/s wr, 102 op/s
Nov 29 08:10:51 compute-0 ceph-mon[75237]: osdmap e317: 3 total, 3 up, 3 in
Nov 29 08:10:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609221832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.179 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.189 255071 DEBUG nova.compute.provider_tree [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.205 255071 DEBUG nova.scheduler.client.report [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.230 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.232 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.287 255071 INFO nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.290 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.291 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.314 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.366 255071 INFO nova.virt.block_device [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Booting with volume snapshot fce04476-a05f-4e3f-9956-30a42aa7e07c at /dev/vda
Nov 29 08:10:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4193380424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:52 compute-0 nova_compute[255040]: 2025-11-29 08:10:52.659 255071 DEBUG nova.policy [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:10:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Nov 29 08:10:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 08:10:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Nov 29 08:10:52 compute-0 ceph-mon[75237]: pgmap v1604: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 10 KiB/s wr, 102 op/s
Nov 29 08:10:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1609221832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4193380424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Nov 29 08:10:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 08:10:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Nov 29 08:10:53 compute-0 nova_compute[255040]: 2025-11-29 08:10:53.357 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:53 compute-0 nova_compute[255040]: 2025-11-29 08:10:53.386 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Successfully created port: 8943e356-9f8e-4b4c-a308-d113f8558460 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:10:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.0 KiB/s wr, 32 op/s
Nov 29 08:10:53 compute-0 ceph-mon[75237]: osdmap e318: 3 total, 3 up, 3 in
Nov 29 08:10:53 compute-0 ceph-mon[75237]: osdmap e319: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.173 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Successfully updated port: 8943e356-9f8e-4b4c-a308-d113f8558460 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.195 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.195 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.196 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.273 255071 DEBUG nova.compute.manager [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-changed-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.273 255071 DEBUG nova.compute.manager [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Refreshing instance network info cache due to event network-changed-8943e356-9f8e-4b4c-a308-d113f8558460. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.273 255071 DEBUG oslo_concurrency.lockutils [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.332 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:10:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-0 nova_compute[255040]: 2025-11-29 08:10:54.772 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:54 compute-0 ceph-mon[75237]: pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.0 KiB/s wr, 32 op/s
Nov 29 08:10:54 compute-0 ceph-mon[75237]: osdmap e320: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.003 255071 DEBUG nova.network.neutron [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updating instance_info_cache with network_info: [{"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.024 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.024 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Instance network_info: |[{"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.025 255071 DEBUG oslo_concurrency.lockutils [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.025 255071 DEBUG nova.network.neutron [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Refreshing network info cache for port 8943e356-9f8e-4b4c-a308-d113f8558460 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Nov 29 08:10:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 18 KiB/s wr, 129 op/s
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.967 255071 DEBUG nova.network.neutron [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updated VIF entry in instance network info cache for port 8943e356-9f8e-4b4c-a308-d113f8558460. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.968 255071 DEBUG nova.network.neutron [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updating instance_info_cache with network_info: [{"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:55 compute-0 podman[285315]: 2025-11-29 08:10:55.98011272 +0000 UTC m=+0.145471324 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:10:55 compute-0 nova_compute[255040]: 2025-11-29 08:10:55.980 255071 DEBUG oslo_concurrency.lockutils [req-489138c8-e78b-4cd3-a680-33f1057430ae req-40a92bba-5d10-4d70-86a2-44782636d5a2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.8615818519242038e-06 of space, bias 1.0, pg target 0.0008584745555772611 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:10:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532685539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:56 compute-0 ceph-mon[75237]: osdmap e321: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-0 ceph-mon[75237]: pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 18 KiB/s wr, 129 op/s
Nov 29 08:10:56 compute-0 nova_compute[255040]: 2025-11-29 08:10:56.978 255071 DEBUG os_brick.utils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:10:56 compute-0 nova_compute[255040]: 2025-11-29 08:10:56.981 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:56 compute-0 nova_compute[255040]: 2025-11-29 08:10:56.996 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:56 compute-0 nova_compute[255040]: 2025-11-29 08:10:56.996 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[17db8d4c-dca9-4d29-b4e1-59e4e197bd98]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:56 compute-0 nova_compute[255040]: 2025-11-29 08:10:56.999 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.010 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.011 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[948310d4-fe6b-4db7-89c6-094b4025d0f9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.014 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.028 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.028 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[0a93de8e-115b-462a-927a-97b131526266]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.031 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[ac74ca10-17e0-4874-ab17-1bc2170de253]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.032 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.067 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.072 255071 DEBUG os_brick.utils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:10:57 compute-0 nova_compute[255040]: 2025-11-29 08:10:57.072 255071 DEBUG nova.virt.block_device [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updating existing volume attachment record: 7f63f4d5-9af6-4df1-9f39-fc7cdb30d589 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:10:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929657280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Nov 29 08:10:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 14 KiB/s wr, 104 op/s
Nov 29 08:10:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 08:10:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Nov 29 08:10:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2532685539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3929657280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.018 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.020 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.021 255071 INFO nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Creating image(s)
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.022 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.022 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Ensure instance console log exists: /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.023 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.023 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.023 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.027 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Start _get_guest_xml network_info=[{"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:10:44Z,direct_url=<?>,disk_format='qcow2',id=a1916382-bc5f-42e0-b21c-ef024e59945e,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-781969565',owner='3df24932e2a44aeab3c2aece8a045774',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:10:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-86d69c7f-1685-4057-86e2-30a1849ae3bc', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '86d69c7f-1685-4057-86e2-30a1849ae3bc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5dd77a80-b879-40f3-87b5-03140434178c', 'attached_at': '', 'detached_at': '', 'volume_id': '86d69c7f-1685-4057-86e2-30a1849ae3bc', 'serial': '86d69c7f-1685-4057-86e2-30a1849ae3bc'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': True, 'attachment_id': '7f63f4d5-9af6-4df1-9f39-fc7cdb30d589', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.034 255071 WARNING nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.042 255071 DEBUG nova.virt.libvirt.host [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.043 255071 DEBUG nova.virt.libvirt.host [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.047 255071 DEBUG nova.virt.libvirt.host [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.048 255071 DEBUG nova.virt.libvirt.host [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.048 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.048 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:10:44Z,direct_url=<?>,disk_format='qcow2',id=a1916382-bc5f-42e0-b21c-ef024e59945e,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-781969565',owner='3df24932e2a44aeab3c2aece8a045774',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:10:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.049 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.049 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.049 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.050 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.050 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.050 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.050 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.051 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.051 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.051 255071 DEBUG nova.virt.hardware [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.079 255071 DEBUG nova.storage.rbd_utils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 5dd77a80-b879-40f3-87b5-03140434178c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.085 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.398 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847168362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.566 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.596 255071 DEBUG nova.virt.libvirt.vif [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1737596904',id=18,image_ref='a1916382-bc5f-42e0-b21c-ef024e59945e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIYbaW5Cz98MPv+dg9KtHkYpYVVoIaatnuhSC1XdhwyJ+P+b6nHd2r5M7Ip3vlw7oXiKc0nUgjp60S2QMU0rNP2g/q8a9v2jtUSVxBO0ZN/oMFDMg1yFPVI6kZ8/JTHqdw==',key_name='tempest-keypair-249992626',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-boighzo5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1666331213',image_owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=5dd77a80-b879-40f3-87b5-03140434178c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.598 255071 DEBUG nova.network.os_vif_util [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.599 255071 DEBUG nova.network.os_vif_util [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.600 255071 DEBUG nova.objects.instance [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dd77a80-b879-40f3-87b5-03140434178c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.613 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <uuid>5dd77a80-b879-40f3-87b5-03140434178c</uuid>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <name>instance-00000012</name>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1737596904</nova:name>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:10:58</nova:creationTime>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="a1916382-bc5f-42e0-b21c-ef024e59945e"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <nova:port uuid="8943e356-9f8e-4b4c-a308-d113f8558460">
Nov 29 08:10:58 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <system>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="serial">5dd77a80-b879-40f3-87b5-03140434178c</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="uuid">5dd77a80-b879-40f3-87b5-03140434178c</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </system>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <os>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </os>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <features>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </features>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/5dd77a80-b879-40f3-87b5-03140434178c_disk.config">
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </source>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-86d69c7f-1685-4057-86e2-30a1849ae3bc">
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </source>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:10:58 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <serial>86d69c7f-1685-4057-86e2-30a1849ae3bc</serial>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:b6:93:1a"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <target dev="tap8943e356-9f"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/console.log" append="off"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <video>
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </video>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <input type="keyboard" bus="usb"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:10:58 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:10:58 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:10:58 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:10:58 compute-0 nova_compute[255040]: </domain>
Nov 29 08:10:58 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
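The End _get_guest_xml record above carries the complete libvirt domain definition nova hands to the hypervisor: a 131072 KiB (128 MiB, m1.nano) single-vCPU q35 guest with the RBD-backed config-drive CD-ROM on sda and the boot volume on vda. A minimal sketch for pulling those fields out of a saved copy of the XML; the guest.xml path is hypothetical, since nova only logs the document and never writes it to disk:

# Minimal sketch: extract key fields from a libvirt domain XML like the one
# logged above. 'guest.xml' is an assumed local copy of the logged document.
import xml.etree.ElementTree as ET

root = ET.parse("guest.xml").getroot()
print("uuid:   ", root.findtext("uuid"))
print("name:   ", root.findtext("name"))
print("memory: ", root.findtext("memory"), "KiB")  # libvirt defaults to KiB
print("vcpus:  ", root.findtext("vcpu"))

# List each disk's RBD source and guest target, e.g.
# vms/<uuid>_disk.config -> sda (sata), volumes/volume-86d6... -> vda (virtio).
for disk in root.iter("disk"):
    src, tgt = disk.find("source"), disk.find("target")
    if src is not None and tgt is not None:
        print(src.get("name"), "->", tgt.get("dev"), f"({tgt.get('bus')})")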
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.614 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Preparing to wait for external event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.614 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.615 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.615 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
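The three lockutils records above show the usual oslo.concurrency pattern: a named lock ("<instance-uuid>-events") is acquired around the per-instance event bookkeeping and released as soon as the waiter is registered. A hedged sketch of the same primitive; the function body is illustrative only, not nova's code:

# Sketch of the named-lock pattern visible in the lock lines above: every
# caller using the same lock name is serialized by one semaphore.
from oslo_concurrency import lockutils

INSTANCE_UUID = "5dd77a80-b879-40f3-87b5-03140434178c"

@lockutils.synchronized(f"{INSTANCE_UUID}-events")
def _create_or_get_event():
    # mutate the per-instance event dict here, one thread at a time
    pass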
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.615 255071 DEBUG nova.virt.libvirt.vif [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1737596904',id=18,image_ref='a1916382-bc5f-42e0-b21c-ef024e59945e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIYbaW5Cz98MPv+dg9KtHkYpYVVoIaatnuhSC1XdhwyJ+P+b6nHd2r5M7Ip3vlw7oXiKc0nUgjp60S2QMU0rNP2g/q8a9v2jtUSVxBO0ZN/oMFDMg1yFPVI6kZ8/JTHqdw==',key_name='tempest-keypair-249992626',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-boighzo5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1666331213',image_owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=5dd77a80-b879-40f3-87b5-03140434178c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.616 255071 DEBUG nova.network.os_vif_util [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.616 255071 DEBUG nova.network.os_vif_util [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.617 255071 DEBUG os_vif [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.617 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.618 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.618 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.623 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8943e356-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.624 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8943e356-9f, col_values=(('external_ids', {'iface-id': '8943e356-9f8e-4b4c-a308-d113f8558460', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:93:1a', 'vm-uuid': '5dd77a80-b879-40f3-87b5-03140434178c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.625 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 NetworkManager[49116]: <info>  [1764403858.6270] manager: (tap8943e356-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.629 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.636 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.638 255071 INFO os_vif [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f')
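The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif driving ovsdbapp against the local Open vSwitch database. A hedged sketch of the same port-plug transaction; the unix socket path is an assumption (adjust for the deployment), and the values are copied from the log:

# Sketch of the ovsdbapp transaction seen above (AddPortCommand + DbSetCommand
# on the Interface row). OVSDB socket location is assumed.
from ovs.db import idl
from ovsdbapp.backend.ovs_idl import connection, idlutils
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/run/openvswitch/db.sock"  # assumed default location

helper = idlutils.get_schema_helper(OVSDB, "Open_vSwitch")
helper.register_all()
api = impl_idl.OvsdbIdl(connection.Connection(idl.Idl(OVSDB, helper), timeout=5))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap8943e356-9f", may_exist=True))
    txn.add(api.db_set("Interface", "tap8943e356-9f",
                       ("external_ids", {
                           "iface-id": "8943e356-9f8e-4b4c-a308-d113f8558460",
                           "iface-status": "active",
                           "attached-mac": "fa:16:3e:b6:93:1a",
                       })))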
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.692 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.692 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.692 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:b6:93:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.693 255071 INFO nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Using config drive
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292381737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4292381737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 nova_compute[255040]: 2025-11-29 08:10:58.718 255071 DEBUG nova.storage.rbd_utils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 5dd77a80-b879-40f3-87b5-03140434178c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172647911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172647911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 14 KiB/s wr, 104 op/s
Nov 29 08:10:58 compute-0 ceph-mon[75237]: osdmap e322: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2847168362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4292381737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4292381737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2172647911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2172647911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Nov 29 08:10:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.193 255071 INFO nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Creating config drive at /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.202 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9miep85f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.341 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9miep85f" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.377 255071 DEBUG nova.storage.rbd_utils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 5dd77a80-b879-40f3-87b5-03140434178c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.382 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config 5dd77a80-b879-40f3-87b5-03140434178c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.606 255071 DEBUG oslo_concurrency.processutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config 5dd77a80-b879-40f3-87b5-03140434178c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.608 255071 INFO nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Deleting local config drive /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config because it was imported into RBD.
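The two processutils records above build the config-drive ISO and import it into the vms pool as <uuid>_disk.config, after which the local copy is deleted. A hedged replay of the same pair of commands with oslo.concurrency; paths and the transient /tmp staging directory are copied from the log and only exist on the compute host at that moment:

# Hedged replay of the two subprocess calls logged above.
from oslo_concurrency import processutils

iso = "/var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c/disk.config"

# 1. Build the config-drive ISO (the volume label must be 'config-2').
processutils.execute(
    "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
    "-allow-multidot", "-l", "-publisher",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp9miep85f")

# 2. Import it into the 'vms' RBD pool as <uuid>_disk.config.
processutils.execute(
    "rbd", "import", "--pool", "vms", iso,
    "5dd77a80-b879-40f3-87b5-03140434178c_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")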
Nov 29 08:10:59 compute-0 kernel: tap8943e356-9f: entered promiscuous mode
Nov 29 08:10:59 compute-0 NetworkManager[49116]: <info>  [1764403859.6712] manager: (tap8943e356-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 08:10:59 compute-0 ovn_controller[153295]: 2025-11-29T08:10:59Z|00169|binding|INFO|Claiming lport 8943e356-9f8e-4b4c-a308-d113f8558460 for this chassis.
Nov 29 08:10:59 compute-0 ovn_controller[153295]: 2025-11-29T08:10:59Z|00170|binding|INFO|8943e356-9f8e-4b4c-a308-d113f8558460: Claiming fa:16:3e:b6:93:1a 10.100.0.12
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.673 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.680 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:93:1a 10.100.0.12'], port_security=['fa:16:3e:b6:93:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5dd77a80-b879-40f3-87b5-03140434178c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': '688ed12f-3bea-4537-80cf-50d770e51be0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=8943e356-9f8e-4b4c-a308-d113f8558460) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.681 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 8943e356-9f8e-4b4c-a308-d113f8558460 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.682 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:10:59 compute-0 ovn_controller[153295]: 2025-11-29T08:10:59Z|00171|binding|INFO|Setting lport 8943e356-9f8e-4b4c-a308-d113f8558460 ovn-installed in OVS
Nov 29 08:10:59 compute-0 ovn_controller[153295]: 2025-11-29T08:10:59Z|00172|binding|INFO|Setting lport 8943e356-9f8e-4b4c-a308-d113f8558460 up in Southbound
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.698 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.706 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 systemd-udevd[285461]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.711 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3c66df-c84e-4a36-ab1d-22fdf8fb6d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 systemd-machined[216271]: New machine qemu-18-instance-00000012.
Nov 29 08:10:59 compute-0 NetworkManager[49116]: <info>  [1764403859.7286] device (tap8943e356-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:10:59 compute-0 NetworkManager[49116]: <info>  [1764403859.7299] device (tap8943e356-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:10:59 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.758 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[cc37c718-a5f2-4ccb-ae37-5e3ee4f4cd40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.763 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[76c92f82-ea29-490e-83ea-466399b94be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 19 KiB/s wr, 231 op/s
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.774 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.802 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[9adec020-a127-4955-bb4b-8aeadac3ffe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.829 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[40c6ec9d-3171-4d99-9e53-2585fb812932]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604659, 'reachable_time': 30731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285474, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Nov 29 08:10:59 compute-0 ceph-mon[75237]: osdmap e323: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.855 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3f70fe-e34c-4201-9f02-e5686f2ca67d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604674, 'tstamp': 604674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285476, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604678, 'tstamp': 604678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285476, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.860 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.862 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 nova_compute[255040]: 2025-11-29 08:10:59.865 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.865 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.865 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.866 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:10:59.866 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
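At this point the metadata agent has wired up the ovnmeta-6e23492e-… namespace: the privsep replies above show tap6e23492e-b1 inside it holding 10.100.0.2/28 and the well-known 169.254.169.254/32 metadata address. A quick hedged check from the compute host (requires root, and the namespace name is taken from the log):

# Hedged check that the OVN metadata namespace exists and carries
# 169.254.169.254; run as root on the compute host itself.
from oslo_concurrency import processutils

ns = "ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f"  # from the log above
out, _err = processutils.execute("ip", "netns", "exec", ns,
                                 "ip", "-o", "addr", "show")
assert "169.254.169.254" in out
print(out)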
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.127 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403860.1263902, 5dd77a80-b879-40f3-87b5-03140434178c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.128 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] VM Started (Lifecycle Event)
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.155 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.160 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403860.1281507, 5dd77a80-b879-40f3-87b5-03140434178c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.160 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] VM Paused (Lifecycle Event)
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.187 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.191 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.212 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.350 255071 DEBUG nova.compute.manager [req-3b874ff7-3784-498f-a498-367264c7029a req-97dfde32-f5fe-4ba9-a86b-f99d2d1dea47 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.350 255071 DEBUG oslo_concurrency.lockutils [req-3b874ff7-3784-498f-a498-367264c7029a req-97dfde32-f5fe-4ba9-a86b-f99d2d1dea47 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.351 255071 DEBUG oslo_concurrency.lockutils [req-3b874ff7-3784-498f-a498-367264c7029a req-97dfde32-f5fe-4ba9-a86b-f99d2d1dea47 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.351 255071 DEBUG oslo_concurrency.lockutils [req-3b874ff7-3784-498f-a498-367264c7029a req-97dfde32-f5fe-4ba9-a86b-f99d2d1dea47 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.351 255071 DEBUG nova.compute.manager [req-3b874ff7-3784-498f-a498-367264c7029a req-97dfde32-f5fe-4ba9-a86b-f99d2d1dea47 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Processing event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.352 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.356 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403860.3558247, 5dd77a80-b879-40f3-87b5-03140434178c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.356 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] VM Resumed (Lifecycle Event)
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.359 255071 DEBUG nova.virt.libvirt.driver [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.362 255071 INFO nova.virt.libvirt.driver [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Instance spawned successfully.
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.363 255071 INFO nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Took 2.34 seconds to spawn the instance on the hypervisor.
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.363 255071 DEBUG nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.376 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.379 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
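The two "Synchronizing instance power state" records bracket the Paused -> Resumed transition: DB power_state 0 with VM power_state 3, then VM power_state 1. Those integers map to nova's power-state constants; a small check, assuming a host with nova installed and that STATE_MAP is available as in current nova.compute.power_state:

# The numeric power states in the sync records above, as nova constants.
from nova.compute import power_state

assert power_state.NOSTATE == 0   # "current DB power_state: 0"
assert power_state.PAUSED == 3    # VM power_state after the Paused event
assert power_state.RUNNING == 1   # VM power_state after the Resumed event
print(power_state.STATE_MAP[3], "->", power_state.STATE_MAP[1])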
Nov 29 08:11:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Nov 29 08:11:00 compute-0 ceph-mon[75237]: pgmap v1614: 305 pgs: 305 active+clean; 167 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 19 KiB/s wr, 231 op/s
Nov 29 08:11:00 compute-0 ceph-mon[75237]: osdmap e324: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 nova_compute[255040]: 2025-11-29 08:11:00.956 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:11:01 compute-0 nova_compute[255040]: 2025-11-29 08:11:01.392 255071 INFO nova.compute.manager [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Took 9.89 seconds to build instance.
Nov 29 08:11:01 compute-0 nova_compute[255040]: 2025-11-29 08:11:01.423 255071 DEBUG oslo_concurrency.lockutils [None req-d71232c1-18ef-49cb-a597-d8e242bf2e6f 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 54 KiB/s wr, 242 op/s
Nov 29 08:11:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Nov 29 08:11:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.587 255071 DEBUG nova.compute.manager [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.587 255071 DEBUG oslo_concurrency.lockutils [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.588 255071 DEBUG oslo_concurrency.lockutils [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.588 255071 DEBUG oslo_concurrency.lockutils [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.588 255071 DEBUG nova.compute.manager [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] No waiting events found dispatching network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:02 compute-0 nova_compute[255040]: 2025-11-29 08:11:02.588 255071 WARNING nova.compute.manager [req-8bc16f9f-39a7-4ca2-b2d5-b6774ed53ff6 req-2bf0a87f-9e2c-4b8b-b284-44949b52c979 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received unexpected event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 for instance with vm_state active and task_state None.
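The WARNING above is the benign tail of the external-event handshake: the waiter registered at 08:10:58 was already satisfied by the first network-vif-plugged at 08:11:00, so the duplicate neutron emits once the port goes fully active finds no one waiting. An illustrative analogue of the prepare/deliver/pop flow (not nova's actual code), using plain threading:

# Illustrative analogue of the event flow seen above: a waiter is registered
# before the VIF is plugged, completed by the first network-vif-plugged, and
# a late duplicate finds no waiter and is logged as unexpected.
import threading

_events = {}              # (instance_uuid, event_name) -> threading.Event
_lock = threading.Lock()  # plays the role of the "<uuid>-events" lock

def prepare(uuid, name):
    with _lock:
        return _events.setdefault((uuid, name), threading.Event())

def deliver(uuid, name):
    with _lock:
        ev = _events.pop((uuid, name), None)
    if ev is None:
        print(f"WARNING: received unexpected event {name} for {uuid}")
    else:
        ev.set()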
Nov 29 08:11:02 compute-0 ceph-mon[75237]: osdmap e325: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Nov 29 08:11:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 ceph-mon[75237]: pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 54 KiB/s wr, 242 op/s
Nov 29 08:11:03 compute-0 ceph-mon[75237]: osdmap e326: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 ceph-mon[75237]: osdmap e327: 3 total, 3 up, 3 in
Nov 29 08:11:03 compute-0 nova_compute[255040]: 2025-11-29 08:11:03.626 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355345711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355345711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
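The two audit entries above record client.openstack polling the monitor with "df" and "osd pool get-quota" on the volumes pool, which is consistent with a periodic capacity check from the OpenStack side. For reference, a hedged sketch of issuing the same monitor commands through the python3-rados bindings; it assumes the bindings are installed and that /etc/ceph/ceph.conf plus a client.openstack keyring are readable, neither of which this log shows directly.

import json

import rados  # python3-rados, assumed to be installed on the host

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    for cmd in (
        {"prefix": "df", "format": "json"},
        {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
    ):
        # mon_command takes the JSON command string plus an input buffer and
        # returns (return code, output buffer, error string).
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        if ret == 0:
            print(cmd["prefix"], "->", json.loads(outbuf))
        else:
            print(cmd["prefix"], "failed:", ret, errs)
finally:
    cluster.shutdown()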
Nov 29 08:11:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 49 KiB/s wr, 216 op/s
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.742 255071 DEBUG nova.compute.manager [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-changed-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.743 255071 DEBUG nova.compute.manager [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Refreshing instance network info cache due to event network-changed-8943e356-9f8e-4b4c-a308-d113f8558460. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.743 255071 DEBUG oslo_concurrency.lockutils [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.743 255071 DEBUG oslo_concurrency.lockutils [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.744 255071 DEBUG nova.network.neutron [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Refreshing network info cache for port 8943e356-9f8e-4b4c-a308-d113f8558460 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:11:04 compute-0 nova_compute[255040]: 2025-11-29 08:11:04.777 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3355345711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3355345711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:04 compute-0 ceph-mon[75237]: pgmap v1620: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 49 KiB/s wr, 216 op/s
Nov 29 08:11:04 compute-0 podman[285519]: 2025-11-29 08:11:04.913937507 +0000 UTC m=+0.069088839 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 08:11:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 36 KiB/s wr, 302 op/s
Nov 29 08:11:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/993174994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:06 compute-0 ceph-mon[75237]: pgmap v1621: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 36 KiB/s wr, 302 op/s
Nov 29 08:11:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/993174994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 KiB/s wr, 231 op/s
Nov 29 08:11:07 compute-0 nova_compute[255040]: 2025-11-29 08:11:07.778 255071 DEBUG nova.network.neutron [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updated VIF entry in instance network info cache for port 8943e356-9f8e-4b4c-a308-d113f8558460. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:11:07 compute-0 nova_compute[255040]: 2025-11-29 08:11:07.779 255071 DEBUG nova.network.neutron [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updating instance_info_cache with network_info: [{"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:07 compute-0 nova_compute[255040]: 2025-11-29 08:11:07.799 255071 DEBUG oslo_concurrency.lockutils [req-c375a120-89ab-4344-8c8b-ba74ca461b14 req-fdf1cf0b-5d4d-486e-9396-c1d716abf719 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5dd77a80-b879-40f3-87b5-03140434178c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
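The cache update above embeds the full network_info document for port 8943e356-9f8e-4b4c-a308-d113f8558460 (fixed IP 10.100.0.12, floating IP 192.168.122.213). A short sketch of walking that structure; the dict literal is a trimmed copy of the logged data, reduced to the fields used below.

network_info = [{
    "id": "8943e356-9f8e-4b4c-a308-d113f8558460",
    "address": "fa:16:3e:b6:93:1a",
    "network": {
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{
                "address": "10.100.0.12",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.213", "type": "floating"}],
            }],
        }],
    },
}]

# Print each port with its fixed address and any associated floating IPs.
for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = ", ".join(f["address"] for f in ip.get("floating_ips", []))
            print(f'port {vif["id"]}: {ip["address"]} (floating: {floats or "none"})')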
Nov 29 08:11:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Nov 29 08:11:08 compute-0 nova_compute[255040]: 2025-11-29 08:11:08.630 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Nov 29 08:11:09 compute-0 ceph-mon[75237]: pgmap v1622: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 KiB/s wr, 231 op/s
Nov 29 08:11:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.2 KiB/s wr, 233 op/s
Nov 29 08:11:09 compute-0 nova_compute[255040]: 2025-11-29 08:11:09.780 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:10 compute-0 nova_compute[255040]: 2025-11-29 08:11:10.288 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Nov 29 08:11:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 08:11:10 compute-0 ceph-mon[75237]: osdmap e328: 3 total, 3 up, 3 in
Nov 29 08:11:10 compute-0 ceph-mon[75237]: pgmap v1624: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.2 KiB/s wr, 233 op/s
Nov 29 08:11:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-0 ceph-mon[75237]: osdmap e329: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 KiB/s wr, 130 op/s
Nov 29 08:11:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Nov 29 08:11:12 compute-0 ceph-mon[75237]: pgmap v1626: 305 pgs: 305 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 KiB/s wr, 130 op/s
Nov 29 08:11:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 08:11:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Nov 29 08:11:12 compute-0 podman[285539]: 2025-11-29 08:11:12.911664938 +0000 UTC m=+0.070112618 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
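Each podman health_status entry here reports the result of the container's configured healthcheck (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks), along with the current failing streak. A sketch that extracts name, status and streak from such lines in the placeholder journal export used above.

import re

# The name=, health_status= and health_failing_streak= fields appear in that
# order in the podman health_status lines shown in this log.
HEALTH_RE = re.compile(
    r"container health_status .*?name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,]+), health_failing_streak=(?P<streak>\d+)"
)

with open("compute-0.log") as fh:  # placeholder path (assumed journal export)
    for line in fh:
        m = HEALTH_RE.search(line)
        if m:
            print(f'{m["name"]}: {m["status"]} (failing streak {m["streak"]})')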
Nov 29 08:11:13 compute-0 nova_compute[255040]: 2025-11-29 08:11:13.059 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Nov 29 08:11:13 compute-0 nova_compute[255040]: 2025-11-29 08:11:13.682 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.5 KiB/s wr, 34 op/s
Nov 29 08:11:13 compute-0 ceph-mon[75237]: osdmap e330: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 nova_compute[255040]: 2025-11-29 08:11:14.782 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 ceph-mon[75237]: pgmap v1628: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.5 KiB/s wr, 34 op/s
Nov 29 08:11:15 compute-0 ceph-mon[75237]: osdmap e331: 3 total, 3 up, 3 in
Nov 29 08:11:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 314 KiB/s wr, 63 op/s
Nov 29 08:11:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359521003' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3359521003' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:16 compute-0 ceph-mon[75237]: pgmap v1630: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 314 KiB/s wr, 63 op/s
Nov 29 08:11:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3359521003' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3359521003' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/771320802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:17 compute-0 ovn_controller[153295]: 2025-11-29T08:11:17Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.12
Nov 29 08:11:17 compute-0 ovn_controller[153295]: 2025-11-29T08:11:17Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b6:93:1a 10.100.0.12
Nov 29 08:11:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Nov 29 08:11:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 08:11:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Nov 29 08:11:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/771320802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 313 KiB/s wr, 46 op/s
Nov 29 08:11:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Nov 29 08:11:18 compute-0 ceph-mon[75237]: osdmap e332: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-0 ceph-mon[75237]: pgmap v1632: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 171 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 313 KiB/s wr, 46 op/s
Nov 29 08:11:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:18 compute-0 nova_compute[255040]: 2025-11-29 08:11:18.685 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3203743444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:19 compute-0 ceph-mon[75237]: osdmap e333: 3 total, 3 up, 3 in
Nov 29 08:11:19 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3203743444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Nov 29 08:11:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 08:11:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Nov 29 08:11:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 178 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 994 KiB/s rd, 1008 KiB/s wr, 126 op/s
Nov 29 08:11:19 compute-0 nova_compute[255040]: 2025-11-29 08:11:19.783 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:19 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/944448704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Nov 29 08:11:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-0 ceph-mon[75237]: osdmap e334: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-0 ceph-mon[75237]: pgmap v1635: 305 pgs: 305 active+clean; 178 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 994 KiB/s rd, 1008 KiB/s wr, 126 op/s
Nov 29 08:11:20 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/944448704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:20 compute-0 ovn_controller[153295]: 2025-11-29T08:11:20Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.12
Nov 29 08:11:20 compute-0 ovn_controller[153295]: 2025-11-29T08:11:20Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b6:93:1a 10.100.0.12
Nov 29 08:11:21 compute-0 ceph-mon[75237]: osdmap e335: 3 total, 3 up, 3 in
Nov 29 08:11:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 198 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 208 op/s
Nov 29 08:11:22 compute-0 ovn_controller[153295]: 2025-11-29T08:11:22Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:93:1a 10.100.0.12
Nov 29 08:11:22 compute-0 ovn_controller[153295]: 2025-11-29T08:11:22Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:93:1a 10.100.0.12
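ovn_controller NAKs the request for 10.100.0.11 because the port's current lease is 10.100.0.12; on the retry at 08:11:22 the guest takes the offered address (DHCPOFFER followed by DHCPACK). An illustrative sketch that tracks such exchanges per MAC across an exported copy of this journal; compute-0.log is again a placeholder name.

import re
from collections import defaultdict

# Match the ovn_controller pinctrl DHCP lines shown above.
DHCP_RE = re.compile(r"\|(DHCPNAK|DHCPOFFER|DHCPACK) ([0-9a-f:]{17}) (\S+)")

history = defaultdict(list)
with open("compute-0.log") as fh:  # placeholder path (assumed journal export)
    for line in fh:
        m = DHCP_RE.search(line)
        if m:
            kind, mac, ip = m.groups()
            history[mac].append((kind, ip))

for mac, seq in history.items():
    # A healthy exchange ends in an ACK; a trailing NAK suggests a stale lease.
    status = "ok" if seq and seq[-1][0] == "DHCPACK" else "check lease"
    print(mac, seq, "->", status)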
Nov 29 08:11:22 compute-0 ceph-mon[75237]: pgmap v1637: 305 pgs: 305 active+clean; 198 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 208 op/s
Nov 29 08:11:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Nov 29 08:11:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 08:11:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Nov 29 08:11:23 compute-0 nova_compute[255040]: 2025-11-29 08:11:23.688 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 310 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 24 MiB/s wr, 195 op/s
Nov 29 08:11:24 compute-0 nova_compute[255040]: 2025-11-29 08:11:24.789 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:24 compute-0 ceph-mon[75237]: osdmap e336: 3 total, 3 up, 3 in
Nov 29 08:11:24 compute-0 ceph-mon[75237]: pgmap v1639: 305 pgs: 305 active+clean; 310 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 24 MiB/s wr, 195 op/s
Nov 29 08:11:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4131025795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4131025795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 609 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 69 MiB/s wr, 306 op/s
Nov 29 08:11:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4131025795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4131025795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:26 compute-0 ceph-mon[75237]: pgmap v1640: 305 pgs: 305 active+clean; 609 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 69 MiB/s wr, 306 op/s
Nov 29 08:11:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:27.133 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:27.134 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:27.135 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:27 compute-0 podman[285563]: 2025-11-29 08:11:27.136480749 +0000 UTC m=+0.290266807 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 08:11:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2001802993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 609 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 53 MiB/s wr, 234 op/s
Nov 29 08:11:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2001802993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:27 compute-0 nova_compute[255040]: 2025-11-29 08:11:27.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:28 compute-0 nova_compute[255040]: 2025-11-29 08:11:28.691 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:28 compute-0 sudo[285589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:28 compute-0 sudo[285589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:28 compute-0 sudo[285589]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:28 compute-0 sudo[285614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:28 compute-0 sudo[285614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:28 compute-0 sudo[285614]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Nov 29 08:11:29 compute-0 ceph-mon[75237]: pgmap v1641: 305 pgs: 305 active+clean; 609 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 53 MiB/s wr, 234 op/s
Nov 29 08:11:29 compute-0 sudo[285639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:29 compute-0 sudo[285639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 sudo[285639]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 08:11:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Nov 29 08:11:29 compute-0 sudo[285664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:11:29 compute-0 sudo[285664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 sudo[285664]: pam_unix(sudo:session): session closed for user root
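The sudo entries in this stretch record cephadm probing the host (check-host, gather-facts) and, a few lines below, invoking ceph-volume as root via the ceph-admin account. A small sketch that lists every cephadm command captured this way, reading the same placeholder journal export as the earlier sketches.

import re

# COMMAND= is the tail of each sudo audit line, as in the entries above.
SUDO_RE = re.compile(r"sudo\[\d+\]: ceph-admin : .*COMMAND=(?P<cmd>.+)$")

with open("compute-0.log") as fh:  # placeholder path (assumed journal export)
    for line in fh:
        m = SUDO_RE.search(line)
        if m and "cephadm" in m["cmd"]:
            print(m["cmd"])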
Nov 29 08:11:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:11:29 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:11:29 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:29 compute-0 sudo[285709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:29 compute-0 sudo[285709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 sudo[285709]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-0 sudo[285734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:29 compute-0 sudo[285734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 sudo[285734]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-0 sudo[285759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:29 compute-0 sudo[285759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 sudo[285759]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:29 compute-0 sudo[285784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:11:29 compute-0 sudo[285784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:29 compute-0 nova_compute[255040]: 2025-11-29 08:11:29.791 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 909 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 88 MiB/s wr, 308 op/s
Nov 29 08:11:29 compute-0 nova_compute[255040]: 2025-11-29 08:11:29.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:30 compute-0 ceph-mon[75237]: osdmap e337: 3 total, 3 up, 3 in
Nov 29 08:11:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:30 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:30 compute-0 sudo[285784]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 52e0c964-f073-4228-8fa8-0de12b63a8ea does not exist
Nov 29 08:11:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4daee6d9-ea0e-43f4-a4d8-75aac8109dd6 does not exist
Nov 29 08:11:30 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f6334bf6-6599-4d8b-b41c-ebaf5962d51c does not exist
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:30 compute-0 sudo[285838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:30 compute-0 sudo[285838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:30 compute-0 sudo[285838]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:30 compute-0 sudo[285863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:30 compute-0 sudo[285863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:30 compute-0 sudo[285863]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:30 compute-0 sudo[285888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:30 compute-0 sudo[285888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:30 compute-0 sudo[285888]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:30 compute-0 sudo[285913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:11:30 compute-0 sudo[285913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:30 compute-0 nova_compute[255040]: 2025-11-29 08:11:30.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:30 compute-0 nova_compute[255040]: 2025-11-29 08:11:30.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:11:30 compute-0 nova_compute[255040]: 2025-11-29 08:11:30.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.031364548 +0000 UTC m=+0.051380453 container create cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.009425488 +0000 UTC m=+0.029441413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:31 compute-0 systemd[1]: Started libpod-conmon-cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332.scope.
Nov 29 08:11:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.175232248 +0000 UTC m=+0.195248163 container init cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.183213812 +0000 UTC m=+0.203229717 container start cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.18874413 +0000 UTC m=+0.208760045 container attach cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 08:11:31 compute-0 ceph-mon[75237]: pgmap v1643: 305 pgs: 305 active+clean; 909 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 88 MiB/s wr, 308 op/s
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:31 compute-0 systemd[1]: libpod-cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332.scope: Deactivated successfully.
Nov 29 08:11:31 compute-0 strange_margulis[285992]: 167 167
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.19615484 +0000 UTC m=+0.216170775 container died cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 29 08:11:31 compute-0 conmon[285992]: conmon cc873e4e39ae77218b88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332.scope/container/memory.events
Nov 29 08:11:31 compute-0 nova_compute[255040]: 2025-11-29 08:11:31.201 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:31 compute-0 nova_compute[255040]: 2025-11-29 08:11:31.203 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:31 compute-0 nova_compute[255040]: 2025-11-29 08:11:31.203 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:11:31 compute-0 nova_compute[255040]: 2025-11-29 08:11:31.203 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 40011a89-5ea1-4ffe-bda7-a3116abd2267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8cc8286c96e0c54544392fc3aa7990c92a382c5623c9c5bc7b6afcc974e313-merged.mount: Deactivated successfully.
Nov 29 08:11:31 compute-0 podman[285976]: 2025-11-29 08:11:31.260199392 +0000 UTC m=+0.280215337 container remove cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:11:31 compute-0 systemd[1]: libpod-conmon-cc873e4e39ae77218b889059cbc1aca741dd692399ccae5e2406060111b28332.scope: Deactivated successfully.
Nov 29 08:11:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Nov 29 08:11:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 08:11:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Nov 29 08:11:31 compute-0 podman[286015]: 2025-11-29 08:11:31.470002865 +0000 UTC m=+0.045659079 container create ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:11:31 compute-0 systemd[1]: Started libpod-conmon-ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6.scope.
Nov 29 08:11:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:31 compute-0 podman[286015]: 2025-11-29 08:11:31.449279027 +0000 UTC m=+0.024935271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:31 compute-0 podman[286015]: 2025-11-29 08:11:31.562740239 +0000 UTC m=+0.138396503 container init ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 08:11:31 compute-0 podman[286015]: 2025-11-29 08:11:31.571742501 +0000 UTC m=+0.147398725 container start ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:11:31 compute-0 podman[286015]: 2025-11-29 08:11:31.577547787 +0000 UTC m=+0.153204051 container attach ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:11:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532332519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532332519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 91 MiB/s wr, 321 op/s
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.121 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating instance_info_cache with network_info: [{"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.136 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-40011a89-5ea1-4ffe-bda7-a3116abd2267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.136 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.136 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.136 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.137 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.154 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:11:32 compute-0 ceph-mon[75237]: osdmap e338: 3 total, 3 up, 3 in
Nov 29 08:11:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/532332519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/532332519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:32 compute-0 ceph-mon[75237]: pgmap v1645: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 91 MiB/s wr, 321 op/s
Nov 29 08:11:32 compute-0 hardcore_haibt[286031]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:11:32 compute-0 hardcore_haibt[286031]: --> relative data size: 1.0
Nov 29 08:11:32 compute-0 hardcore_haibt[286031]: --> All data devices are unavailable
Nov 29 08:11:32 compute-0 systemd[1]: libpod-ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6.scope: Deactivated successfully.
Nov 29 08:11:32 compute-0 podman[286015]: 2025-11-29 08:11:32.743664499 +0000 UTC m=+1.319320723 container died ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 08:11:32 compute-0 systemd[1]: libpod-ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6.scope: Consumed 1.090s CPU time.
Nov 29 08:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-93697f99175e146a1c8810ce88b54dfca0cab9da022b290ee2395164df67b61f-merged.mount: Deactivated successfully.
Nov 29 08:11:32 compute-0 podman[286015]: 2025-11-29 08:11:32.807576987 +0000 UTC m=+1.383233201 container remove ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 29 08:11:32 compute-0 systemd[1]: libpod-conmon-ca6585b98960355092cac6e38305a66b97f9418f87dfa147c2bf57c6e254e5c6.scope: Deactivated successfully.
Nov 29 08:11:32 compute-0 sudo[285913]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:32 compute-0 sudo[286074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:32 compute-0 sudo[286074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:32 compute-0 sudo[286074]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:32 compute-0 sudo[286099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:32 compute-0 sudo[286099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:32 compute-0 sudo[286099]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.993 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:32 compute-0 nova_compute[255040]: 2025-11-29 08:11:32.994 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.026 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.026 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.027 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.027 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.028 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:33 compute-0 sudo[286124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:33 compute-0 sudo[286124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:33 compute-0 sudo[286124]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:33 compute-0 sudo[286150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:11:33 compute-0 sudo[286150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 08:11:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528624490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.489379204 +0000 UTC m=+0.049959234 container create 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.490 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:33 compute-0 systemd[1]: Started libpod-conmon-6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e.scope.
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.470601109 +0000 UTC m=+0.031181159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.582 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.583 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.586 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.587 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.602030054 +0000 UTC m=+0.162610114 container init 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.612810714 +0000 UTC m=+0.173390754 container start 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.616619706 +0000 UTC m=+0.177199756 container attach 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:11:33 compute-0 confident_bell[286253]: 167 167
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Nov 29 08:11:33 compute-0 systemd[1]: libpod-6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e.scope: Deactivated successfully.
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.623223724 +0000 UTC m=+0.183803754 container died 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 08:11:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 08:11:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Nov 29 08:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6c5e5fa735bfde54faec3098cf26a74c48aefb61c7550cb4f76902a207560a-merged.mount: Deactivated successfully.
Nov 29 08:11:33 compute-0 podman[286234]: 2025-11-29 08:11:33.689918438 +0000 UTC m=+0.250498458 container remove 6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.695 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:33 compute-0 systemd[1]: libpod-conmon-6107859b91f80c6af57e0e2b953a38946105daf1379613560966048f6570946e.scope: Deactivated successfully.
Nov 29 08:11:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 85 MiB/s wr, 270 op/s
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.837 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.839 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4028MB free_disk=59.98794174194336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.839 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:33 compute-0 nova_compute[255040]: 2025-11-29 08:11:33.840 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:33 compute-0 podman[286276]: 2025-11-29 08:11:33.884056648 +0000 UTC m=+0.046864151 container create dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 08:11:33 compute-0 systemd[1]: Started libpod-conmon-dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9.scope.
Nov 29 08:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44418c89dc9ad543cae07832dae4520ea513dfb0e0c91b10a4708e5732610a5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:33 compute-0 podman[286276]: 2025-11-29 08:11:33.863564428 +0000 UTC m=+0.026371951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44418c89dc9ad543cae07832dae4520ea513dfb0e0c91b10a4708e5732610a5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44418c89dc9ad543cae07832dae4520ea513dfb0e0c91b10a4708e5732610a5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44418c89dc9ad543cae07832dae4520ea513dfb0e0c91b10a4708e5732610a5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:33 compute-0 podman[286276]: 2025-11-29 08:11:33.970953535 +0000 UTC m=+0.133761098 container init dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:11:33 compute-0 podman[286276]: 2025-11-29 08:11:33.98339604 +0000 UTC m=+0.146203543 container start dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:33 compute-0 podman[286276]: 2025-11-29 08:11:33.987815399 +0000 UTC m=+0.150622942 container attach dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.041 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 40011a89-5ea1-4ffe-bda7-a3116abd2267 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.042 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 5dd77a80-b879-40f3-87b5-03140434178c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.042 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.043 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.191 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:34 compute-0 ceph-mon[75237]: osdmap e339: 3 total, 3 up, 3 in
Nov 29 08:11:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2528624490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:34 compute-0 ceph-mon[75237]: osdmap e340: 3 total, 3 up, 3 in
Nov 29 08:11:34 compute-0 ceph-mon[75237]: pgmap v1648: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 85 MiB/s wr, 270 op/s
Nov 29 08:11:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206965728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.618 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.626 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.643 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.674 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.675 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:34 compute-0 nova_compute[255040]: 2025-11-29 08:11:34.794 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:34 compute-0 epic_jennings[286292]: {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     "0": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "devices": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "/dev/loop3"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             ],
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_name": "ceph_lv0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_size": "21470642176",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "name": "ceph_lv0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "tags": {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.crush_device_class": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.encrypted": "0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_id": "0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.vdo": "0"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             },
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "vg_name": "ceph_vg0"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         }
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     ],
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     "1": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "devices": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "/dev/loop4"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             ],
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_name": "ceph_lv1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_size": "21470642176",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "name": "ceph_lv1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "tags": {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.crush_device_class": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.encrypted": "0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_id": "1",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.vdo": "0"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             },
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "vg_name": "ceph_vg1"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         }
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     ],
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     "2": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "devices": [
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "/dev/loop5"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             ],
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_name": "ceph_lv2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_size": "21470642176",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "name": "ceph_lv2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "tags": {
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.crush_device_class": "",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.encrypted": "0",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osd_id": "2",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:                 "ceph.vdo": "0"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             },
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "type": "block",
Nov 29 08:11:34 compute-0 epic_jennings[286292]:             "vg_name": "ceph_vg2"
Nov 29 08:11:34 compute-0 epic_jennings[286292]:         }
Nov 29 08:11:34 compute-0 epic_jennings[286292]:     ]
Nov 29 08:11:34 compute-0 epic_jennings[286292]: }
Nov 29 08:11:34 compute-0 systemd[1]: libpod-dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9.scope: Deactivated successfully.
Nov 29 08:11:34 compute-0 conmon[286292]: conmon dfaec304aa4217bad7a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9.scope/container/memory.events
Nov 29 08:11:34 compute-0 podman[286276]: 2025-11-29 08:11:34.877308451 +0000 UTC m=+1.040115954 container died dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-44418c89dc9ad543cae07832dae4520ea513dfb0e0c91b10a4708e5732610a5b-merged.mount: Deactivated successfully.
Nov 29 08:11:34 compute-0 podman[286276]: 2025-11-29 08:11:34.941068106 +0000 UTC m=+1.103875619 container remove dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 08:11:34 compute-0 systemd[1]: libpod-conmon-dfaec304aa4217bad7a6801ef1a240670f7646e979a1ed9e1f46d0bbab3afeb9.scope: Deactivated successfully.
Nov 29 08:11:34 compute-0 sudo[286150]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:35 compute-0 sudo[286344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:35 compute-0 sudo[286344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:35 compute-0 podman[286336]: 2025-11-29 08:11:35.053710045 +0000 UTC m=+0.073309843 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:11:35 compute-0 sudo[286344]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:35 compute-0 sudo[286382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:35 compute-0 sudo[286382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:35 compute-0 sudo[286382]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:35 compute-0 sudo[286408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:35 compute-0 sudo[286408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:35 compute-0 sudo[286408]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:35 compute-0 sudo[286433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:11:35 compute-0 sudo[286433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/206965728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.586758951 +0000 UTC m=+0.047559480 container create d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:11:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Nov 29 08:11:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 08:11:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Nov 29 08:11:35 compute-0 systemd[1]: Started libpod-conmon-d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff.scope.
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.568459999 +0000 UTC m=+0.029260548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.695867175 +0000 UTC m=+0.156667704 container init d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.705359311 +0000 UTC m=+0.166159850 container start d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.710002015 +0000 UTC m=+0.170802544 container attach d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 08:11:35 compute-0 zen_williamson[286513]: 167 167
Nov 29 08:11:35 compute-0 systemd[1]: libpod-d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff.scope: Deactivated successfully.
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.715145754 +0000 UTC m=+0.175946273 container died d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f79283e15a10f1160633bdf4aa610bc4b40247c0e6007cc5e52cbb22dec934ea-merged.mount: Deactivated successfully.
Nov 29 08:11:35 compute-0 podman[286497]: 2025-11-29 08:11:35.767682327 +0000 UTC m=+0.228482856 container remove d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:11:35 compute-0 systemd[1]: libpod-conmon-d772c47c02da4e150c5434e027f9da5659e2b5aded2e7fd243a5ee720fbf16ff.scope: Deactivated successfully.
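
The podman create/init/start/attach/died/remove sequence above (container zen_williamson, a name auto-generated by podman) is a short-lived helper container run against the Ceph image to probe the host; the exact command line is not recorded in this log. A minimal sketch of the same run-and-discard pattern in Python; the stat call and path are assumptions, though a call of that kind would explain the bare "167 167" uid/gid line printed by zen_williamson above:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def run_ephemeral(args):
    # --rm removes the container as soon as it exits, which matches the
    # immediate "container died" / "container remove" events in the log.
    result = subprocess.run(["podman", "run", "--rm", IMAGE] + args,
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Hypothetical probe: print the owner uid/gid of the ceph state directory.
    print(run_ephemeral(["stat", "-c", "%u %g", "/var/lib/ceph"]))
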
Nov 29 08:11:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 37 MiB/s wr, 190 op/s
Nov 29 08:11:35 compute-0 podman[286538]: 2025-11-29 08:11:35.967287695 +0000 UTC m=+0.053066029 container create a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:11:36 compute-0 systemd[1]: Started libpod-conmon-a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae.scope.
Nov 29 08:11:36 compute-0 podman[286538]: 2025-11-29 08:11:35.943504795 +0000 UTC m=+0.029283109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b575fe8af0981d11734baf2c412deca99e76d768073cd9261843cd232ff4ff86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b575fe8af0981d11734baf2c412deca99e76d768073cd9261843cd232ff4ff86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b575fe8af0981d11734baf2c412deca99e76d768073cd9261843cd232ff4ff86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b575fe8af0981d11734baf2c412deca99e76d768073cd9261843cd232ff4ff86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:36 compute-0 podman[286538]: 2025-11-29 08:11:36.071453467 +0000 UTC m=+0.157231801 container init a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:36 compute-0 podman[286538]: 2025-11-29 08:11:36.090528549 +0000 UTC m=+0.176306843 container start a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:11:36 compute-0 podman[286538]: 2025-11-29 08:11:36.095606126 +0000 UTC m=+0.181384440 container attach a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.419 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.422 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.423 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.423 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.423 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.426 255071 INFO nova.compute.manager [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Terminating instance
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.428 255071 DEBUG nova.compute.manager [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
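
The oslo_concurrency DEBUG lines above show terminate_instance serializing all work for this instance on a lock named after its UUID before the hypervisor teardown starts. A minimal sketch of that locking pattern using the public lockutils API; the function body is a placeholder, not nova's code:

from oslo_concurrency import lockutils

INSTANCE_UUID = "5dd77a80-b879-40f3-87b5-03140434178c"

def shutdown_instance():
    # Hypothetical stand-in for the real hypervisor teardown.
    print("destroying instance on the hypervisor")

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # The "Acquiring lock ... acquired ... released" DEBUG lines bracket
    # exactly this kind of critical section.
    shutdown_instance()

do_terminate_instance()
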
Nov 29 08:11:36 compute-0 kernel: tap8943e356-9f (unregistering): left promiscuous mode
Nov 29 08:11:36 compute-0 NetworkManager[49116]: <info>  [1764403896.5030] device (tap8943e356-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:11:36 compute-0 ovn_controller[153295]: 2025-11-29T08:11:36Z|00173|binding|INFO|Releasing lport 8943e356-9f8e-4b4c-a308-d113f8558460 from this chassis (sb_readonly=0)
Nov 29 08:11:36 compute-0 ovn_controller[153295]: 2025-11-29T08:11:36Z|00174|binding|INFO|Setting lport 8943e356-9f8e-4b4c-a308-d113f8558460 down in Southbound
Nov 29 08:11:36 compute-0 ovn_controller[153295]: 2025-11-29T08:11:36Z|00175|binding|INFO|Removing iface tap8943e356-9f ovn-installed in OVS
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.526 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.536 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:93:1a 10.100.0.12'], port_security=['fa:16:3e:b6:93:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5dd77a80-b879-40f3-87b5-03140434178c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': '688ed12f-3bea-4537-80cf-50d770e51be0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=8943e356-9f8e-4b4c-a308-d113f8558460) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.538 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 8943e356-9f8e-4b4c-a308-d113f8558460 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.539 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
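
The metadata agent reacts to the Port_Binding change through an ovsdbapp row event, as the matched-event repr above shows (events=('update',), table='Port_Binding', conditions=None). A sketch of how such an event class is declared; wiring it into a running OVN southbound IDL connection is omitted, and the attribute handling in run() is illustrative rather than the agent's actual logic:

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # React when a port this chassis hosted goes down, as in
        # "Port ... unbound from our chassis" above.
        if getattr(old, 'up', [True])[0] and not row.up[0]:
            print(f"port {row.logical_port} unbound from {self.chassis_name}")
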
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.550 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 08:11:36 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 16.022s CPU time.
Nov 29 08:11:36 compute-0 systemd-machined[216271]: Machine qemu-18-instance-00000012 terminated.
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.577 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[077a66c9-dfa1-45e3-b2cc-af7e83159568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.616 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[dcca5900-a142-48c5-9f52-952af65e068a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.621 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a935f735-5904-4727-8c00-b73d274615e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 ceph-mon[75237]: osdmap e341: 3 total, 3 up, 3 in
Nov 29 08:11:36 compute-0 ceph-mon[75237]: pgmap v1650: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 37 MiB/s wr, 190 op/s
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.657 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.658 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.659 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.661 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[87c97797-62cc-4f06-93b3-ff888bd35300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.677 255071 INFO nova.virt.libvirt.driver [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Instance destroyed successfully.
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.678 255071 DEBUG nova.objects.instance [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid 5dd77a80-b879-40f3-87b5-03140434178c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.689 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff2686b-9e32-4f7d-97da-2edb2755a339]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604659, 'reachable_time': 30731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286578, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1799746448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.698 255071 DEBUG nova.virt.libvirt.vif [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1737596904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1737596904',id=18,image_ref='a1916382-bc5f-42e0-b21c-ef024e59945e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIYbaW5Cz98MPv+dg9KtHkYpYVVoIaatnuhSC1XdhwyJ+P+b6nHd2r5M7Ip3vlw7oXiKc0nUgjp60S2QMU0rNP2g/q8a9v2jtUSVxBO0ZN/oMFDMg1yFPVI6kZ8/JTHqdw==',key_name='tempest-keypair-249992626',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-boighzo5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1666331213',image_owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=5dd77a80-b879-40f3-87b5-03140434178c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", 
"version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.703 255071 DEBUG nova.network.os_vif_util [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "8943e356-9f8e-4b4c-a308-d113f8558460", "address": "fa:16:3e:b6:93:1a", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8943e356-9f", "ovs_interfaceid": "8943e356-9f8e-4b4c-a308-d113f8558460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.705 255071 DEBUG nova.network.os_vif_util [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.706 255071 DEBUG os_vif [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.712 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.713 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8943e356-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.718 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.721 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.725 255071 INFO os_vif [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=8943e356-9f8e-4b4c-a308-d113f8558460,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8943e356-9f')
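
The DelPortCommand transaction at 08:11:36.713 is os-vif detaching tap8943e356-9f from br-int. Outside nova, roughly the same operation can be issued through ovsdbapp; the OVSDB socket path and timeout below are assumptions and may differ on this host:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket path

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
# Equivalent of the logged DelPortCommand(port=tap8943e356-9f, bridge=br-int, if_exists=True).
ovs.del_port('tap8943e356-9f', bridge='br-int', if_exists=True).execute(check_error=True)
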
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.727 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e60ae0d3-0850-4cff-90ed-0f3c288df5ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604674, 'tstamp': 604674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286583, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604678, 'tstamp': 604678}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286583, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.729 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.740 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.740 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.741 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:36 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:36.742 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.753 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.964 255071 INFO nova.virt.libvirt.driver [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Deleting instance files /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c_del
Nov 29 08:11:36 compute-0 nova_compute[255040]: 2025-11-29 08:11:36.966 255071 INFO nova.virt.libvirt.driver [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Deletion of /var/lib/nova/instances/5dd77a80-b879-40f3-87b5-03140434178c_del complete
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.025 255071 INFO nova.compute.manager [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.026 255071 DEBUG oslo.service.loopingcall [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.026 255071 DEBUG nova.compute.manager [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.026 255071 DEBUG nova.network.neutron [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.116 255071 DEBUG nova.compute.manager [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-unplugged-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.116 255071 DEBUG oslo_concurrency.lockutils [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.117 255071 DEBUG oslo_concurrency.lockutils [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.117 255071 DEBUG oslo_concurrency.lockutils [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.117 255071 DEBUG nova.compute.manager [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] No waiting events found dispatching network-vif-unplugged-8943e356-9f8e-4b4c-a308-d113f8558460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.117 255071 DEBUG nova.compute.manager [req-e7aa47f2-25f4-43e2-8e52-4aa21b12b47d req-3625f123-fb64-4037-93f5-23a9566da302 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-unplugged-8943e356-9f8e-4b4c-a308-d113f8558460 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
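
The pop_instance_event lines above ("No waiting events found dispatching network-vif-unplugged-...") reflect a registry of per-instance events that callers can wait on and that external notifications pop. The following is only an illustrative reconstruction of that pattern, not nova's implementation:

import threading
from collections import defaultdict

class InstanceEvents:
    def __init__(self):
        self._events = defaultdict(dict)   # instance uuid -> {event name: Event}
        self._lock = threading.Lock()

    def prepare(self, uuid, name):
        # A caller that wants to wait on "network-vif-unplugged-<port>" registers it here.
        with self._lock:
            ev = self._events[uuid][name] = threading.Event()
        return ev

    def pop(self, uuid, name):
        # Returns the waiter if something is blocked on this event, else None,
        # which corresponds to "No waiting events found dispatching ..." above.
        with self._lock:
            return self._events.get(uuid, {}).pop(name, None)
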
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]: {
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_id": 2,
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "type": "bluestore"
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     },
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_id": 0,
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "type": "bluestore"
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     },
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_id": 1,
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:         "type": "bluestore"
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]:     }
Nov 29 08:11:37 compute-0 relaxed_liskov[286554]: }
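
The JSON emitted by relaxed_liskov above is an OSD inventory gathered by a ceph-volume style listing inside the helper container (the exact subcommand is not logged); each entry maps an OSD fsid to its BlueStore device. A few lines of Python turn it into an osd_id-to-device table:

import json

# The inventory printed above, pasted verbatim; in practice it would be read
# from the helper container's stdout.
raw = """
{
    "2406c235-b877-477d-8a53-b5b71e6811ae": {
        "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
        "type": "bluestore"
    },
    "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
        "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
        "type": "bluestore"
    },
    "e72f2659-baec-4840-b3cf-a1856ca51c15": {
        "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
        "type": "bluestore"
    }
}
"""

osds = json.loads(raw)
by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
print(by_id)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 0: '...ceph_lv0', 1: '...ceph_lv1'}
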
Nov 29 08:11:37 compute-0 systemd[1]: libpod-a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae.scope: Deactivated successfully.
Nov 29 08:11:37 compute-0 systemd[1]: libpod-a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae.scope: Consumed 1.191s CPU time.
Nov 29 08:11:37 compute-0 podman[286631]: 2025-11-29 08:11:37.344809932 +0000 UTC m=+0.029804522 container died a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b575fe8af0981d11734baf2c412deca99e76d768073cd9261843cd232ff4ff86-merged.mount: Deactivated successfully.
Nov 29 08:11:37 compute-0 podman[286631]: 2025-11-29 08:11:37.412677257 +0000 UTC m=+0.097671827 container remove a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 08:11:37 compute-0 systemd[1]: libpod-conmon-a27198eb05a7bf476ccc7d200e1f50ee3b520b2b9aba1d75ef508a4736b980ae.scope: Deactivated successfully.
Nov 29 08:11:37 compute-0 sudo[286433]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:11:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:11:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:37 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 34c92cd1-43dd-4cda-930c-212f4c6fb0f0 does not exist
Nov 29 08:11:37 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d8703a6e-c5d3-45e9-be56-0991b9051caa does not exist
Nov 29 08:11:37 compute-0 sudo[286645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:37 compute-0 sudo[286645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:37 compute-0 sudo[286645]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:37 compute-0 sudo[286670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:11:37 compute-0 sudo[286670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:37 compute-0 sudo[286670]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Nov 29 08:11:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 08:11:37 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Nov 29 08:11:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1799746448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:37 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:11:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 13 MiB/s wr, 171 op/s
Nov 29 08:11:37 compute-0 nova_compute[255040]: 2025-11-29 08:11:37.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.124 255071 DEBUG nova.network.neutron [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.144 255071 INFO nova.compute.manager [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Took 1.12 seconds to deallocate network for instance.
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.345 255071 INFO nova.compute.manager [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.347 255071 DEBUG nova.compute.manager [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Deleting volume: 86d69c7f-1685-4057-86e2-30a1849ae3bc _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.509 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.510 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:38 compute-0 nova_compute[255040]: 2025-11-29 08:11:38.576 255071 DEBUG oslo_concurrency.processutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Nov 29 08:11:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 08:11:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:11:38 compute-0 ceph-mon[75237]: osdmap e342: 3 total, 3 up, 3 in
Nov 29 08:11:38 compute-0 ceph-mon[75237]: pgmap v1652: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 13 MiB/s wr, 171 op/s
Nov 29 08:11:38 compute-0 ceph-mon[75237]: osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:11:38
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 29 08:11:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:11:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/466492532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/466492532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614000963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.068 255071 DEBUG oslo_concurrency.processutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
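
The "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" subprocess that nova_compute starts at 08:11:38.576 and that returns 0 in 0.492s above is how the RBD backend refreshes pool capacity. A standalone sketch of the same call; the per-pool stats keys follow the usual ceph df JSON layout but are not guaranteed identical on every Ceph release:

import json
from oslo_concurrency import processutils

# Re-run the command exactly as it appears in the log and parse its JSON output.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
df = json.loads(out)

for pool in df.get('pools', []):
    stats = pool.get('stats', {})
    print(pool['name'], stats.get('bytes_used'), stats.get('max_avail'))
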
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.077 255071 DEBUG nova.compute.provider_tree [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.103 255071 DEBUG nova.scheduler.client.report [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
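
The inventory reported to placement above fixes what this host can still schedule: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Worked out from the logged numbers:

# Effective capacity placement schedules against, from the inventory DEBUG line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    limit = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, limit)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
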
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.136 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.164 255071 INFO nova.scheduler.client.report [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance 5dd77a80-b879-40f3-87b5-03140434178c
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.192 255071 DEBUG nova.compute.manager [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.192 255071 DEBUG oslo_concurrency.lockutils [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5dd77a80-b879-40f3-87b5-03140434178c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.192 255071 DEBUG oslo_concurrency.lockutils [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.193 255071 DEBUG oslo_concurrency.lockutils [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.193 255071 DEBUG nova.compute.manager [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] No waiting events found dispatching network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.193 255071 WARNING nova.compute.manager [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received unexpected event network-vif-plugged-8943e356-9f8e-4b4c-a308-d113f8558460 for instance with vm_state deleted and task_state None.
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.194 255071 DEBUG nova.compute.manager [req-93c90bc3-aacf-44f4-b888-0a993d65c3d5 req-4b5542c2-620f-43c8-b8f6-5dec0c14c38f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Received event network-vif-deleted-8943e356-9f8e-4b4c-a308-d113f8558460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.226 255071 DEBUG oslo_concurrency.lockutils [None req-3bf58fe7-00a3-44f4-b7ed-19eef74b4c90 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "5dd77a80-b879-40f3-87b5-03140434178c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
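The Acquiring / acquired / released trio logged around do_terminate_instance above is the standard oslo.concurrency trace. A minimal sketch of the pattern that produces it, assuming only the public lockutils API (the function body and the way the UUID is passed are illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        # lockutils emits the DEBUG lines seen above ("Acquiring lock",
        # "acquired ... waited Ns", "released ... held Ns") around the
        # decorated call, serializing work per instance UUID.
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            pass  # shutdown, deallocate network, delete volumes, ...

        do_terminate_instance()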
Nov 29 08:11:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Nov 29 08:11:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 08:11:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Nov 29 08:11:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/466492532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/466492532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3614000963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:39 compute-0 ceph-mon[75237]: osdmap e344: 3 total, 3 up, 3 in
Nov 29 08:11:39 compute-0 nova_compute[255040]: 2025-11-29 08:11:39.797 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 2.9 MiB/s wr, 301 op/s
Nov 29 08:11:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890323427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1199430162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1199430162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Nov 29 08:11:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 08:11:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Nov 29 08:11:40 compute-0 ceph-mon[75237]: pgmap v1655: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 2.9 MiB/s wr, 301 op/s
Nov 29 08:11:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/890323427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1199430162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1199430162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Nov 29 08:11:41 compute-0 ceph-mon[75237]: osdmap e345: 3 total, 3 up, 3 in
Nov 29 08:11:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 08:11:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Nov 29 08:11:41 compute-0 nova_compute[255040]: 2025-11-29 08:11:41.721 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 13 KiB/s wr, 339 op/s
Nov 29 08:11:41 compute-0 nova_compute[255040]: 2025-11-29 08:11:41.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.192 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.192 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.193 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.193 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.193 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.194 255071 INFO nova.compute.manager [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Terminating instance
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.195 255071 DEBUG nova.compute.manager [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
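For context, the terminate sequence that follows is the server-side view of an instance delete; the tempest run drives the compute API directly, but the same flow can be triggered by hand. A minimal sketch with openstacksdk, where the cloud name "mycloud" is a placeholder and credentials come from the usual clouds.yaml or environment:

    import openstack

    # Hypothetical client-side trigger for the teardown logged below:
    # delete the server by the UUID seen in these log lines.
    conn = openstack.connect(cloud="mycloud")
    conn.compute.delete_server("40011a89-5ea1-4ffe-bda7-a3116abd2267")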
Nov 29 08:11:42 compute-0 kernel: tap3c03306d-f3 (unregistering): left promiscuous mode
Nov 29 08:11:42 compute-0 NetworkManager[49116]: <info>  [1764403902.2776] device (tap3c03306d-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.291 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 ovn_controller[153295]: 2025-11-29T08:11:42Z|00176|binding|INFO|Releasing lport 3c03306d-f387-4844-a235-2eaba1efde2e from this chassis (sb_readonly=0)
Nov 29 08:11:42 compute-0 ovn_controller[153295]: 2025-11-29T08:11:42Z|00177|binding|INFO|Setting lport 3c03306d-f387-4844-a235-2eaba1efde2e down in Southbound
Nov 29 08:11:42 compute-0 ovn_controller[153295]: 2025-11-29T08:11:42Z|00178|binding|INFO|Removing iface tap3c03306d-f3 ovn-installed in OVS
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.294 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.303 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:0b:ca 10.100.0.11'], port_security=['fa:16:3e:98:0b:ca 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40011a89-5ea1-4ffe-bda7-a3116abd2267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24a98d15-8f29-4bfa-8cd9-cf25fb940203', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=3c03306d-f387-4844-a235-2eaba1efde2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.304 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 3c03306d-f387-4844-a235-2eaba1efde2e in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.305 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.307 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dc36ba-da7c-414f-a97d-3b48f4397e1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.307 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace which is not needed anymore
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.323 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 08:11:42 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 18.874s CPU time.
Nov 29 08:11:42 compute-0 systemd-machined[216271]: Machine qemu-17-instance-00000011 terminated.
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.432 255071 INFO nova.virt.libvirt.driver [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Instance destroyed successfully.
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.433 255071 DEBUG nova.objects.instance [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid 40011a89-5ea1-4ffe-bda7-a3116abd2267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.447 255071 DEBUG nova.virt.libvirt.vif [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1377407486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1377407486',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEbvAc9/jx3S8T+wMMpuDgL3NZ7787JivLCfsQ6S8S7rjlgDHEMLkK8QzfAaHZjQKlWuCCrWH3SBgFrm+yzUG9F59LWD7hG53fMSRwk/Ued5TFWRCaTfMEt25DEVMC+Hw==',key_name='tempest-keypair-1595562543',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-atc77nhd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e62d407203540599a65ac50d5d447b9',uuid=40011a89-5ea1-4ffe-bda7-a3116abd2267,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.448 255071 DEBUG nova.network.os_vif_util [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "3c03306d-f387-4844-a235-2eaba1efde2e", "address": "fa:16:3e:98:0b:ca", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c03306d-f3", "ovs_interfaceid": "3c03306d-f387-4844-a235-2eaba1efde2e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.449 255071 DEBUG nova.network.os_vif_util [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.450 255071 DEBUG os_vif [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.455 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.455 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c03306d-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
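The DelPortCommand transaction above is what os-vif submits through the OVSDB IDL; a rough hand-run equivalent (not the code path nova actually takes, shown via subprocess only to keep the examples in one language) is the ovs-vsctl call below.

    import subprocess

    # Roughly equivalent to DelPortCommand(port=tap3c03306d-f3,
    # bridge=br-int, if_exists=True): drop the tap port from br-int,
    # tolerating the case where it is already gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap3c03306d-f3"],
        check=True,
    )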
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.457 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.458 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.461 255071 INFO os_vif [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:0b:ca,bridge_name='br-int',has_traffic_filtering=True,id=3c03306d-f387-4844-a235-2eaba1efde2e,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c03306d-f3')
Nov 29 08:11:42 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [NOTICE]   (284266) : haproxy version is 2.8.14-c23fe91
Nov 29 08:11:42 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [NOTICE]   (284266) : path to executable is /usr/sbin/haproxy
Nov 29 08:11:42 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [WARNING]  (284266) : Exiting Master process...
Nov 29 08:11:42 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [ALERT]    (284266) : Current worker (284268) exited with code 143 (Terminated)
Nov 29 08:11:42 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[284262]: [WARNING]  (284266) : All workers exited. Exiting... (0)
Nov 29 08:11:42 compute-0 systemd[1]: libpod-8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92.scope: Deactivated successfully.
Nov 29 08:11:42 compute-0 podman[286741]: 2025-11-29 08:11:42.476010692 +0000 UTC m=+0.057853868 container died 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92-userdata-shm.mount: Deactivated successfully.
Nov 29 08:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e243a254c2ee5c0c953424e3ee356b6dd4e5fbf7395b4e95a734c64d2a51de-merged.mount: Deactivated successfully.
Nov 29 08:11:42 compute-0 podman[286741]: 2025-11-29 08:11:42.518486404 +0000 UTC m=+0.100329580 container cleanup 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:11:42 compute-0 systemd[1]: libpod-conmon-8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92.scope: Deactivated successfully.
Nov 29 08:11:42 compute-0 podman[286802]: 2025-11-29 08:11:42.591487667 +0000 UTC m=+0.047548650 container remove 8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.599 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[29f89de2-2dd4-42ff-b712-acfb992359d1]: (4, ('Sat Nov 29 08:11:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92)\n8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92\nSat Nov 29 08:11:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92)\n8d67db01b08bb417faadfa0d40637da2a3530762562f55aeb9c931ad8bc53b92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
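The privsep reply above captures the wrapper script's output while it stops and removes the per-network haproxy container; the hand-run equivalent is just two podman commands, sketched here with the container name taken straight from the log.

    import subprocess

    # Same effect as the cleanup logged above: stop, then remove, the
    # per-network metadata haproxy container.
    name = "neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f"
    subprocess.run(["podman", "stop", name], check=True)
    subprocess.run(["podman", "rm", name], check=True)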
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.601 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae70dc7-0748-45e1-9c68-65e63d0d5d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.603 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.604 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 kernel: tap6e23492e-b0: left promiscuous mode
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.622 255071 DEBUG nova.compute.manager [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-unplugged-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.623 255071 DEBUG oslo_concurrency.lockutils [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.623 255071 DEBUG oslo_concurrency.lockutils [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.623 255071 DEBUG oslo_concurrency.lockutils [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.623 255071 DEBUG nova.compute.manager [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] No waiting events found dispatching network-vif-unplugged-3c03306d-f387-4844-a235-2eaba1efde2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.623 255071 DEBUG nova.compute.manager [req-9dbbd9a6-de94-4871-b3b9-01b7cb938d7b req-e5d69cab-3306-417b-905a-4ebae62dedee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-unplugged-3c03306d-f387-4844-a235-2eaba1efde2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.625 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.627 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1464f0dd-adbf-4754-8c93-9dd821815162]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.641 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1feb7b34-3cdb-4f15-aefb-2a5c1f7edfd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.643 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9903ed25-db54-48b6-a764-dc9ab7449034]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.662 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[486af943-9763-4fb4-ae7c-e3ede149433f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604652, 'reachable_time': 30498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286815, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.666 255071 INFO nova.virt.libvirt.driver [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Deleting instance files /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267_del
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.667 255071 INFO nova.virt.libvirt.driver [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Deletion of /var/lib/nova/instances/40011a89-5ea1-4ffe-bda7-a3116abd2267_del complete
Nov 29 08:11:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e23492e\x2dbeff\x2d43f6\x2db4d1\x2df88ebeea0b6f.mount: Deactivated successfully.
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.668 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:11:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:42.668 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[4158d3b8-cfba-4329-9fd6-c74ad0ac444a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.711 255071 INFO nova.compute.manager [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Took 0.52 seconds to destroy the instance on the hypervisor.
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.712 255071 DEBUG oslo.service.loopingcall [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.712 255071 DEBUG nova.compute.manager [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.713 255071 DEBUG nova.network.neutron [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:11:42 compute-0 ceph-mon[75237]: osdmap e346: 3 total, 3 up, 3 in
Nov 29 08:11:42 compute-0 ceph-mon[75237]: pgmap v1658: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 13 KiB/s wr, 339 op/s
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:11:42 compute-0 nova_compute[255040]: 2025-11-29 08:11:42.989 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.474 255071 DEBUG nova.network.neutron [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.491 255071 INFO nova.compute.manager [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Took 0.78 seconds to deallocate network for instance.
Nov 29 08:11:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:43.581 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.581 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:43.582 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:11:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Nov 29 08:11:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 08:11:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.661 255071 DEBUG nova.compute.manager [req-44774ce5-9e1c-4dd8-981c-0b9f62c597ff req-c2f5dee8-7204-4e20-9176-32b21237308a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-deleted-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.744 255071 INFO nova.compute.manager [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Took 0.25 seconds to detach 1 volumes for instance.
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.745 255071 DEBUG nova.compute.manager [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Deleting volume: 4e507a0b-03e8-4934-a54e-56137eebde3a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:11:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2997464114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 7.4 KiB/s wr, 116 op/s
Nov 29 08:11:43 compute-0 podman[286816]: 2025-11-29 08:11:43.915269389 +0000 UTC m=+0.070989161 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.965 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:43 compute-0 nova_compute[255040]: 2025-11-29 08:11:43.966 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.018 255071 DEBUG oslo_concurrency.processutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1957101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.513 255071 DEBUG oslo_concurrency.processutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
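Nova's RBD image backend sizes its DISK_GB inventory from the same ceph df query logged above. A standalone sketch follows; the JSON field names ("stats", "total_bytes", "total_avail_bytes") are the usual ones but can vary slightly between Ceph releases, and running it needs the same client.openstack keyring and ceph.conf the compute node uses.

    import json
    import subprocess

    # Re-run the query from the log and report raw cluster capacity.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])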
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.522 255071 DEBUG nova.compute.provider_tree [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.547 255071 DEBUG nova.scheduler.client.report [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
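The inventory payload above translates into schedulable capacity as (total - reserved) * allocation_ratio per resource class, i.e. 32 VCPU, 7168 MB of RAM and roughly 52 GB of disk for this node, worked out below:

    # Values copied from the inventory line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2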
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.581 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.602 255071 INFO nova.scheduler.client.report [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance 40011a89-5ea1-4ffe-bda7-a3116abd2267
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.675 255071 DEBUG oslo_concurrency.lockutils [None req-81856b50-88b5-4302-aba5-cbae75a7f298 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.761 255071 DEBUG nova.compute.manager [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.762 255071 DEBUG oslo_concurrency.lockutils [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.762 255071 DEBUG oslo_concurrency.lockutils [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.762 255071 DEBUG oslo_concurrency.lockutils [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40011a89-5ea1-4ffe-bda7-a3116abd2267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.762 255071 DEBUG nova.compute.manager [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] No waiting events found dispatching network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.762 255071 WARNING nova.compute.manager [req-89f35c8e-c747-416e-be3a-97f2281d16de req-1d43a474-e861-42a8-af79-7f6aba56199e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Received unexpected event network-vif-plugged-3c03306d-f387-4844-a235-2eaba1efde2e for instance with vm_state deleted and task_state None.
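The WARNING above is benign during teardown: the vif-plugged notification from neutron raced with the delete, so by the time it is dispatched there is no registered waiter and the instance is already gone. A loose paraphrase of that dispatch decision (not nova's actual code; the names and data shapes are illustrative):

    import logging

    LOG = logging.getLogger(__name__)

    def dispatch_external_event(instance, event_name, waiters):
        # If something registered interest in this event, hand it over;
        # otherwise, for an already-deleted instance, just log and move on.
        waiter = waiters.pop((instance["uuid"], event_name), None)
        if waiter is not None:
            waiter.set_result(event_name)
        elif instance["vm_state"] == "deleted":
            LOG.warning("Received unexpected event %s for instance with "
                        "vm_state %s and task_state %s.",
                        event_name, instance["vm_state"],
                        instance["task_state"])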
Nov 29 08:11:44 compute-0 ceph-mon[75237]: osdmap e347: 3 total, 3 up, 3 in
Nov 29 08:11:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2997464114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:44 compute-0 ceph-mon[75237]: pgmap v1660: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 7.4 KiB/s wr, 116 op/s
Nov 29 08:11:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1957101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:44 compute-0 nova_compute[255040]: 2025-11-29 08:11:44.800 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258540638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258540638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 nova_compute[255040]: 2025-11-29 08:11:45.450 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:11:45.584 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 1.3 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 183 KiB/s rd, 17 MiB/s wr, 263 op/s
Nov 29 08:11:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2258540638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2258540638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939765228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:46 compute-0 ceph-mon[75237]: pgmap v1661: 305 pgs: 305 active+clean; 1.3 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 183 KiB/s rd, 17 MiB/s wr, 263 op/s
Nov 29 08:11:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/939765228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:47 compute-0 nova_compute[255040]: 2025-11-29 08:11:47.457 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 1.3 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 154 KiB/s rd, 15 MiB/s wr, 221 op/s
Nov 29 08:11:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Nov 29 08:11:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 08:11:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Nov 29 08:11:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Nov 29 08:11:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 08:11:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Nov 29 08:11:49 compute-0 ceph-mon[75237]: pgmap v1662: 305 pgs: 305 active+clean; 1.3 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 154 KiB/s rd, 15 MiB/s wr, 221 op/s
Nov 29 08:11:49 compute-0 ceph-mon[75237]: osdmap e348: 3 total, 3 up, 3 in
Nov 29 08:11:49 compute-0 ceph-mon[75237]: osdmap e349: 3 total, 3 up, 3 in
Nov 29 08:11:49 compute-0 nova_compute[255040]: 2025-11-29 08:11:49.802 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 1.4 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 138 KiB/s rd, 49 MiB/s wr, 207 op/s
Nov 29 08:11:51 compute-0 ceph-mon[75237]: pgmap v1665: 305 pgs: 305 active+clean; 1.4 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 138 KiB/s rd, 49 MiB/s wr, 207 op/s
Nov 29 08:11:51 compute-0 nova_compute[255040]: 2025-11-29 08:11:51.672 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403896.6705875, 5dd77a80-b879-40f3-87b5-03140434178c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:51 compute-0 nova_compute[255040]: 2025-11-29 08:11:51.673 255071 INFO nova.compute.manager [-] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] VM Stopped (Lifecycle Event)
Nov 29 08:11:51 compute-0 nova_compute[255040]: 2025-11-29 08:11:51.692 255071 DEBUG nova.compute.manager [None req-4090613e-01ce-4680-a6ad-ec051f3d6bf9 - - - - - -] [instance: 5dd77a80-b879-40f3-87b5-03140434178c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 217 KiB/s rd, 74 MiB/s wr, 340 op/s
Nov 29 08:11:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Nov 29 08:11:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 08:11:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Nov 29 08:11:52 compute-0 ceph-mon[75237]: pgmap v1666: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 217 KiB/s rd, 74 MiB/s wr, 340 op/s
Nov 29 08:11:52 compute-0 nova_compute[255040]: 2025-11-29 08:11:52.459 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:53 compute-0 ceph-mon[75237]: osdmap e350: 3 total, 3 up, 3 in
Nov 29 08:11:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Nov 29 08:11:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 08:11:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Nov 29 08:11:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 181 KiB/s rd, 115 MiB/s wr, 309 op/s
Nov 29 08:11:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Nov 29 08:11:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 08:11:54 compute-0 ceph-mon[75237]: osdmap e351: 3 total, 3 up, 3 in
Nov 29 08:11:54 compute-0 ceph-mon[75237]: pgmap v1669: 305 pgs: 305 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 181 KiB/s rd, 115 MiB/s wr, 309 op/s
Nov 29 08:11:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Nov 29 08:11:54 compute-0 nova_compute[255040]: 2025-11-29 08:11:54.803 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2173985550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2173985550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mon[75237]: osdmap e352: 3 total, 3 up, 3 in
Nov 29 08:11:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2173985550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2173985550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 119 MiB/s wr, 419 op/s
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03362180622029042 of space, bias 1.0, pg target 10.086541866087126 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00012908913687569186 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19309890746076708 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Nov 29 08:11:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Nov 29 08:11:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 08:11:56 compute-0 ceph-mon[75237]: pgmap v1671: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 119 MiB/s wr, 419 op/s
Nov 29 08:11:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Nov 29 08:11:57 compute-0 nova_compute[255040]: 2025-11-29 08:11:57.431 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403902.4297626, 40011a89-5ea1-4ffe-bda7-a3116abd2267 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:57 compute-0 nova_compute[255040]: 2025-11-29 08:11:57.431 255071 INFO nova.compute.manager [-] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] VM Stopped (Lifecycle Event)
Nov 29 08:11:57 compute-0 nova_compute[255040]: 2025-11-29 08:11:57.455 255071 DEBUG nova.compute.manager [None req-010c2604-cf2d-4aa3-b71e-cae8e765bd02 - - - - - -] [instance: 40011a89-5ea1-4ffe-bda7-a3116abd2267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:57 compute-0 nova_compute[255040]: 2025-11-29 08:11:57.461 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:57 compute-0 ceph-mon[75237]: osdmap e353: 3 total, 3 up, 3 in
Nov 29 08:11:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 78 MiB/s wr, 194 op/s
Nov 29 08:11:57 compute-0 podman[286862]: 2025-11-29 08:11:57.932860965 +0000 UTC m=+0.098560076 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 08:11:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Nov 29 08:11:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 08:11:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Nov 29 08:11:58 compute-0 ceph-mon[75237]: pgmap v1673: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 78 MiB/s wr, 194 op/s
Nov 29 08:11:58 compute-0 ceph-mon[75237]: osdmap e354: 3 total, 3 up, 3 in
Nov 29 08:11:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970753207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970753207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1970753207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1970753207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:59 compute-0 nova_compute[255040]: 2025-11-29 08:11:59.805 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 42 MiB/s wr, 264 op/s
Nov 29 08:12:00 compute-0 ceph-mon[75237]: pgmap v1675: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 42 MiB/s wr, 264 op/s
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.750 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.750 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.765 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:12:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.1 MiB/s rd, 37 MiB/s wr, 260 op/s
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.851 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.852 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.862 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.862 255071 INFO nova.compute.claims [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:12:01 compute-0 nova_compute[255040]: 2025-11-29 08:12:01.963 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995338752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.462 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.480 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.487 255071 DEBUG nova.compute.provider_tree [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.503 255071 DEBUG nova.scheduler.client.report [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.525 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.526 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.577 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.578 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.602 255071 INFO nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.623 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.672 255071 INFO nova.virt.block_device [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Booting with volume 3d3fcb07-9e86-4e90-86d4-07632d484796 at /dev/vda
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.827 255071 DEBUG os_brick.utils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.832 255071 DEBUG nova.policy [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.829 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.844 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.844 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[ae01eb63-9a39-4a11-b8eb-e83587be0cba]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.846 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.858 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.858 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[94b52562-b052-45ae-a7f0-21ef93c8e13c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.861 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.873 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.874 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[902ec0cf-afbe-48ee-bcc5-11e3d913df6f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.876 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[f7246053-258a-416e-a428-1def824b868f]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.876 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.905 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:02 compute-0 ceph-mon[75237]: pgmap v1676: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.1 MiB/s rd, 37 MiB/s wr, 260 op/s
Nov 29 08:12:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1995338752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.908 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.908 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.908 255071 DEBUG os_brick.initiator.connectors.lightos [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.909 255071 DEBUG os_brick.utils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:12:02 compute-0 nova_compute[255040]: 2025-11-29 08:12:02.909 255071 DEBUG nova.virt.block_device [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updating existing volume attachment record: 042a5895-3384-4676-a96a-0a3937704486 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:12:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/730949617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:03 compute-0 nova_compute[255040]: 2025-11-29 08:12:03.561 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Successfully created port: 53f09235-af69-49c2-9137-b16a1ea8d7f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:12:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Nov 29 08:12:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 08:12:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Nov 29 08:12:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 142 op/s
Nov 29 08:12:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/730949617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:03 compute-0 ceph-mon[75237]: osdmap e355: 3 total, 3 up, 3 in
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.303 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.305 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.306 255071 INFO nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Creating image(s)
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.307 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.307 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Ensure instance console log exists: /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.308 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.308 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.309 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.660 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Successfully updated port: 53f09235-af69-49c2-9137-b16a1ea8d7f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.676 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.676 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.676 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.755 255071 DEBUG nova.compute.manager [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-changed-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.755 255071 DEBUG nova.compute.manager [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Refreshing instance network info cache due to event network-changed-53f09235-af69-49c2-9137-b16a1ea8d7f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.756 255071 DEBUG oslo_concurrency.lockutils [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.806 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:04 compute-0 nova_compute[255040]: 2025-11-29 08:12:04.812 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:12:04 compute-0 ceph-mon[75237]: pgmap v1678: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 142 op/s
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.560 255071 DEBUG nova.network.neutron [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updating instance_info_cache with network_info: [{"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.583 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.583 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Instance network_info: |[{"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.584 255071 DEBUG oslo_concurrency.lockutils [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.584 255071 DEBUG nova.network.neutron [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Refreshing network info cache for port 53f09235-af69-49c2-9137-b16a1ea8d7f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.589 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Start _get_guest_xml network_info=[{"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3d3fcb07-9e86-4e90-86d4-07632d484796', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3d3fcb07-9e86-4e90-86d4-07632d484796', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '76ba0630-af87-46e8-83ee-b983d76f480d', 'attached_at': '', 'detached_at': '', 'volume_id': '3d3fcb07-9e86-4e90-86d4-07632d484796', 'serial': '3d3fcb07-9e86-4e90-86d4-07632d484796'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '042a5895-3384-4676-a96a-0a3937704486', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.597 255071 WARNING nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.609 255071 DEBUG nova.virt.libvirt.host [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.610 255071 DEBUG nova.virt.libvirt.host [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.615 255071 DEBUG nova.virt.libvirt.host [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.615 255071 DEBUG nova.virt.libvirt.host [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.616 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.617 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.618 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.618 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.618 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.619 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.619 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.620 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.620 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.621 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.621 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.622 255071 DEBUG nova.virt.hardware [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.659 255071 DEBUG nova.storage.rbd_utils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 76ba0630-af87-46e8-83ee-b983d76f480d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:05 compute-0 nova_compute[255040]: 2025-11-29 08:12:05.665 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Nov 29 08:12:05 compute-0 podman[286946]: 2025-11-29 08:12:05.896313763 +0000 UTC m=+0.062701897 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:12:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940217580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.130 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.159 255071 DEBUG nova.virt.libvirt.vif [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-823551388',display_name='tempest-TestVolumeBootPattern-server-823551388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-823551388',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-lh2flxu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:02Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=76ba0630-af87-46e8-83ee-b983d76f480d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.160 255071 DEBUG nova.network.os_vif_util [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.161 255071 DEBUG nova.network.os_vif_util [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.162 255071 DEBUG nova.objects.instance [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76ba0630-af87-46e8-83ee-b983d76f480d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.177 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <uuid>76ba0630-af87-46e8-83ee-b983d76f480d</uuid>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <name>instance-00000013</name>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-server-823551388</nova:name>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:12:05</nova:creationTime>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <nova:port uuid="53f09235-af69-49c2-9137-b16a1ea8d7f3">
Nov 29 08:12:06 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <system>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="serial">76ba0630-af87-46e8-83ee-b983d76f480d</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="uuid">76ba0630-af87-46e8-83ee-b983d76f480d</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </system>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <os>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </os>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <features>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </features>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/76ba0630-af87-46e8-83ee-b983d76f480d_disk.config">
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-3d3fcb07-9e86-4e90-86d4-07632d484796">
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:06 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <serial>3d3fcb07-9e86-4e90-86d4-07632d484796</serial>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:62:b2:a7"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <target dev="tap53f09235-af"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/console.log" append="off"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <video>
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </video>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:12:06 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:12:06 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:12:06 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:12:06 compute-0 nova_compute[255040]: </domain>
Nov 29 08:12:06 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.178 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Preparing to wait for external event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.179 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.180 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.180 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.181 255071 DEBUG nova.virt.libvirt.vif [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-823551388',display_name='tempest-TestVolumeBootPattern-server-823551388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-823551388',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-lh2flxu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:02Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=76ba0630-af87-46e8-83ee-b983d76f480d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.181 255071 DEBUG nova.network.os_vif_util [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.182 255071 DEBUG nova.network.os_vif_util [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.182 255071 DEBUG os_vif [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.183 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.183 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.184 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.188 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.188 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53f09235-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.189 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53f09235-af, col_values=(('external_ids', {'iface-id': '53f09235-af69-49c2-9137-b16a1ea8d7f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:b2:a7', 'vm-uuid': '76ba0630-af87-46e8-83ee-b983d76f480d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.191 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:06 compute-0 NetworkManager[49116]: <info>  [1764403926.1917] manager: (tap53f09235-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.193 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.197 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.197 255071 INFO os_vif [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af')
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.334 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.335 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.335 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:62:b2:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.335 255071 INFO nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Using config drive
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.358 255071 DEBUG nova.storage.rbd_utils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 76ba0630-af87-46e8-83ee-b983d76f480d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.675 255071 INFO nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Creating config drive at /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.684 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk9g68es8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.822 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk9g68es8" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.855 255071 DEBUG nova.storage.rbd_utils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 76ba0630-af87-46e8-83ee-b983d76f480d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.861 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config 76ba0630-af87-46e8-83ee-b983d76f480d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:06 compute-0 ceph-mon[75237]: pgmap v1679: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Nov 29 08:12:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3940217580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.993 255071 DEBUG nova.network.neutron [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updated VIF entry in instance network info cache for port 53f09235-af69-49c2-9137-b16a1ea8d7f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:12:06 compute-0 nova_compute[255040]: 2025-11-29 08:12:06.994 255071 DEBUG nova.network.neutron [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updating instance_info_cache with network_info: [{"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.010 255071 DEBUG oslo_concurrency.lockutils [req-e34b9085-e1a6-4409-867d-516df191938a req-cb808635-1ffc-4abe-b30d-2f813e8a61ae cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.023 255071 DEBUG oslo_concurrency.processutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config 76ba0630-af87-46e8-83ee-b983d76f480d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.025 255071 INFO nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Deleting local config drive /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d/disk.config because it was imported into RBD.
Nov 29 08:12:07 compute-0 kernel: tap53f09235-af: entered promiscuous mode
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.0843] manager: (tap53f09235-af): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 08:12:07 compute-0 ovn_controller[153295]: 2025-11-29T08:12:07Z|00179|binding|INFO|Claiming lport 53f09235-af69-49c2-9137-b16a1ea8d7f3 for this chassis.
Nov 29 08:12:07 compute-0 ovn_controller[153295]: 2025-11-29T08:12:07Z|00180|binding|INFO|53f09235-af69-49c2-9137-b16a1ea8d7f3: Claiming fa:16:3e:62:b2:a7 10.100.0.6
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.085 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.099 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:b2:a7 10.100.0.6'], port_security=['fa:16:3e:62:b2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76ba0630-af87-46e8-83ee-b983d76f480d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=53f09235-af69-49c2-9137-b16a1ea8d7f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.101 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 53f09235-af69-49c2-9137-b16a1ea8d7f3 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.102 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.106 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 ovn_controller[153295]: 2025-11-29T08:12:07Z|00181|binding|INFO|Setting lport 53f09235-af69-49c2-9137-b16a1ea8d7f3 ovn-installed in OVS
Nov 29 08:12:07 compute-0 ovn_controller[153295]: 2025-11-29T08:12:07Z|00182|binding|INFO|Setting lport 53f09235-af69-49c2-9137-b16a1ea8d7f3 up in Southbound
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.110 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.113 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 systemd-udevd[287044]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.118 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1340a3-0ae0-453b-9225-c47e996991eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.120 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e23492e-b1 in ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.123 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e23492e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.123 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c25757-c97c-4a2d-bece-08de9b7fef17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.124 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d70223ff-817a-4531-b39a-d0bc4248f748]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.1302] device (tap53f09235-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.1312] device (tap53f09235-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.139 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f18415-dd89-4db2-a693-f580c9e6cbab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 systemd-machined[216271]: New machine qemu-19-instance-00000013.
Nov 29 08:12:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1612975634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.157 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ef92b2de-869d-4445-9472-bbff1ce962de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.201 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[2a84dd90-70b8-4e0a-8d0b-cae616b9f449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.209 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e5a4e3-61f0-429d-a38e-b711c37f8538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.2111] manager: (tap6e23492e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.248 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[e21449f1-e741-4c17-a86f-bb20a141f08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.252 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[bf39b97d-2973-4295-8fcf-eb91680ffef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.2789] device (tap6e23492e-b0): carrier: link connected
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.287 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[62c48434-40dd-4cd4-84b6-4219f434e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.311 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[16af99bb-1e89-4bf7-8131-c823727f9251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616461, 'reachable_time': 31729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287079, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.335 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0b70d5a5-3960-4a95-8b31-1104ca3eb4f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616461, 'tstamp': 616461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287080, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.358 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0fc086-da0f-4cf6-99f6-d2c09caa2400]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616461, 'reachable_time': 31729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287081, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.406 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd1ae34-b2cc-4db2-87c6-556b2e3d2243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.428 255071 DEBUG nova.compute.manager [req-7dd9acb8-0103-4cb4-ae42-539659356ac5 req-0f71efd5-f814-408a-894b-e2ff8926cff2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.429 255071 DEBUG oslo_concurrency.lockutils [req-7dd9acb8-0103-4cb4-ae42-539659356ac5 req-0f71efd5-f814-408a-894b-e2ff8926cff2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.429 255071 DEBUG oslo_concurrency.lockutils [req-7dd9acb8-0103-4cb4-ae42-539659356ac5 req-0f71efd5-f814-408a-894b-e2ff8926cff2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.429 255071 DEBUG oslo_concurrency.lockutils [req-7dd9acb8-0103-4cb4-ae42-539659356ac5 req-0f71efd5-f814-408a-894b-e2ff8926cff2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.429 255071 DEBUG nova.compute.manager [req-7dd9acb8-0103-4cb4-ae42-539659356ac5 req-0f71efd5-f814-408a-894b-e2ff8926cff2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Processing event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.483 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[24587991-59b1-4a6b-93ca-ad7bcd0f0179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.484 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.485 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.485 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:07 compute-0 kernel: tap6e23492e-b0: entered promiscuous mode
Nov 29 08:12:07 compute-0 NetworkManager[49116]: <info>  [1764403927.4881] manager: (tap6e23492e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.487 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.494 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:07 compute-0 ovn_controller[153295]: 2025-11-29T08:12:07Z|00183|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.495 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.496 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.497 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.499 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bb4453-ca9e-4277-9b0b-62f006224868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.500 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:12:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:07.502 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'env', 'PROCESS_TAG=haproxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.513 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.639 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.640 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403927.6385581, 76ba0630-af87-46e8-83ee-b983d76f480d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.641 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] VM Started (Lifecycle Event)
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.644 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.650 255071 INFO nova.virt.libvirt.driver [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Instance spawned successfully.
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.650 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.658 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.663 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.670 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.671 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.671 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.671 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.672 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.672 255071 DEBUG nova.virt.libvirt.driver [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.681 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.682 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403927.6399064, 76ba0630-af87-46e8-83ee-b983d76f480d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.682 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] VM Paused (Lifecycle Event)
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.705 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.710 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403927.6436749, 76ba0630-af87-46e8-83ee-b983d76f480d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.711 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] VM Resumed (Lifecycle Event)
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.728 255071 INFO nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Took 3.42 seconds to spawn the instance on the hypervisor.
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.728 255071 DEBUG nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.729 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.736 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.771 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.806 255071 INFO nova.compute.manager [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Took 5.99 seconds to build instance.
Nov 29 08:12:07 compute-0 nova_compute[255040]: 2025-11-29 08:12:07.824 255071 DEBUG oslo_concurrency.lockutils [None req-b77e747d-52c4-491a-a196-ad877c8863d6 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 58 op/s
Nov 29 08:12:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Nov 29 08:12:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 08:12:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1612975634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.949603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403927949721, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2846, "num_deletes": 539, "total_data_size": 3602071, "memory_usage": 3662704, "flush_reason": "Manual Compaction"}
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 08:12:07 compute-0 podman[287155]: 2025-11-29 08:12:07.951444103 +0000 UTC m=+0.066182661 container create 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403927979034, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3527334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29087, "largest_seqno": 31932, "table_properties": {"data_size": 3514283, "index_size": 8265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 30923, "raw_average_key_size": 21, "raw_value_size": 3486121, "raw_average_value_size": 2377, "num_data_blocks": 355, "num_entries": 1466, "num_filter_entries": 1466, "num_deletions": 539, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403785, "oldest_key_time": 1764403785, "file_creation_time": 1764403927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 29503 microseconds, and 13924 cpu microseconds.
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.979118) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3527334 bytes OK
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.979144) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.983330) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.983387) EVENT_LOG_v1 {"time_micros": 1764403927983372, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.983420) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3588558, prev total WAL file size 3588558, number of live WAL files 2.
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.985814) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3444KB)], [62(9559KB)]
Nov 29 08:12:07 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403927985994, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13316715, "oldest_snapshot_seqno": -1}
Nov 29 08:12:08 compute-0 systemd[1]: Started libpod-conmon-6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7.scope.
Nov 29 08:12:08 compute-0 podman[287155]: 2025-11-29 08:12:07.911834341 +0000 UTC m=+0.026572929 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:12:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cae95a9c5c674f9df133ee71d96a5c1fe6a9b1692ed34972d8a08813376136f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6273 keys, 11447789 bytes, temperature: kUnknown
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403928090134, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11447789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11398944, "index_size": 32031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 158604, "raw_average_key_size": 25, "raw_value_size": 11279350, "raw_average_value_size": 1798, "num_data_blocks": 1292, "num_entries": 6273, "num_filter_entries": 6273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.090435) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11447789 bytes
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.092526) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.8 rd, 109.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.3 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7339, records dropped: 1066 output_compression: NoCompression
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.092544) EVENT_LOG_v1 {"time_micros": 1764403928092534, "job": 34, "event": "compaction_finished", "compaction_time_micros": 104233, "compaction_time_cpu_micros": 36604, "output_level": 6, "num_output_files": 1, "total_output_size": 11447789, "num_input_records": 7339, "num_output_records": 6273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403928093232, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403928094739, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:07.985703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.094775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.094781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.094783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.094785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:08.094787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:08 compute-0 podman[287155]: 2025-11-29 08:12:08.095358234 +0000 UTC m=+0.210096792 container init 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:12:08 compute-0 podman[287155]: 2025-11-29 08:12:08.105337185 +0000 UTC m=+0.220075743 container start 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:12:08 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [NOTICE]   (287174) : New worker (287176) forked
Nov 29 08:12:08 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [NOTICE]   (287174) : Loading success.
Nov 29 08:12:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Nov 29 08:12:08 compute-0 ceph-mon[75237]: pgmap v1680: 305 pgs: 305 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 58 op/s
Nov 29 08:12:08 compute-0 ceph-mon[75237]: osdmap e356: 3 total, 3 up, 3 in
Nov 29 08:12:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 08:12:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.527 255071 DEBUG nova.compute.manager [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.527 255071 DEBUG oslo_concurrency.lockutils [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.528 255071 DEBUG oslo_concurrency.lockutils [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.529 255071 DEBUG oslo_concurrency.lockutils [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.529 255071 DEBUG nova.compute.manager [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] No waiting events found dispatching network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.530 255071 WARNING nova.compute.manager [req-607ff00d-dd71-4dc3-a6cc-a64c56d0f948 req-c39db00b-e39f-4c4e-b60e-7f6214e9e092 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received unexpected event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 for instance with vm_state active and task_state None.
Nov 29 08:12:09 compute-0 nova_compute[255040]: 2025-11-29 08:12:09.808 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 79 op/s
Nov 29 08:12:09 compute-0 ceph-mon[75237]: osdmap e357: 3 total, 3 up, 3 in
Nov 29 08:12:10 compute-0 ceph-mon[75237]: pgmap v1683: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 79 op/s
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.192 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.9 MiB/s wr, 187 op/s
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.927 255071 DEBUG nova.compute.manager [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-changed-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.927 255071 DEBUG nova.compute.manager [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Refreshing instance network info cache due to event network-changed-53f09235-af69-49c2-9137-b16a1ea8d7f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.928 255071 DEBUG oslo_concurrency.lockutils [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.928 255071 DEBUG oslo_concurrency.lockutils [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:11 compute-0 nova_compute[255040]: 2025-11-29 08:12:11.929 255071 DEBUG nova.network.neutron [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Refreshing network info cache for port 53f09235-af69-49c2-9137-b16a1ea8d7f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:12 compute-0 ceph-mon[75237]: pgmap v1684: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.1 MiB/s rd, 4.9 MiB/s wr, 187 op/s
Nov 29 08:12:13 compute-0 nova_compute[255040]: 2025-11-29 08:12:13.256 255071 DEBUG nova.network.neutron [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updated VIF entry in instance network info cache for port 53f09235-af69-49c2-9137-b16a1ea8d7f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:12:13 compute-0 nova_compute[255040]: 2025-11-29 08:12:13.257 255071 DEBUG nova.network.neutron [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updating instance_info_cache with network_info: [{"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:13 compute-0 nova_compute[255040]: 2025-11-29 08:12:13.278 255071 DEBUG oslo_concurrency.lockutils [req-0f279b89-b7ca-45f6-868b-ce10545f4ac6 req-a3128272-fd60-42de-b681-f732620392c6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-76ba0630-af87-46e8-83ee-b983d76f480d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Nov 29 08:12:14 compute-0 nova_compute[255040]: 2025-11-29 08:12:14.812 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:14 compute-0 podman[287186]: 2025-11-29 08:12:14.978857286 +0000 UTC m=+0.120366656 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:12:15 compute-0 ceph-mon[75237]: pgmap v1685: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.702 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.703 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.723 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.818 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.820 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.830 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.831 255071 INFO nova.compute.claims [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:12:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Nov 29 08:12:15 compute-0 nova_compute[255040]: 2025-11-29 08:12:15.993 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.196 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:16 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610481454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.578 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.587 255071 DEBUG nova.compute.provider_tree [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.603 255071 DEBUG nova.scheduler.client.report [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.622 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.624 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.670 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.672 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.693 255071 INFO nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.733 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.781 255071 INFO nova.virt.block_device [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Booting with volume e659bda1-3f72-42ba-8ec7-e7958af94e60 at /dev/vdb
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.946 255071 DEBUG os_brick.utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.949 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.962 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.963 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfbb770-1efa-4a8e-9044-43cb40bfc8b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.965 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.975 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.976 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5fc9a-e681-4be4-b556-d3c690fee620]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.979 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.990 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.991 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[9b658212-2985-4760-94be-cd0114721434]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.993 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[db5bf427-f79b-4793-a8e9-e168c5d3ab6f]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:16 compute-0 nova_compute[255040]: 2025-11-29 08:12:16.994 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:17 compute-0 ceph-mon[75237]: pgmap v1686: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Nov 29 08:12:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2610481454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.022 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.026 255071 DEBUG os_brick.initiator.connectors.lightos [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.027 255071 DEBUG os_brick.initiator.connectors.lightos [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.027 255071 DEBUG os_brick.initiator.connectors.lightos [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.028 255071 DEBUG os_brick.utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.028 255071 DEBUG nova.virt.block_device [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updating existing volume attachment record: 7d22dc09-53fb-4a71-94c0-4f86ae4f575f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.063 255071 DEBUG nova.policy [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ac027dfac1940a585665db58d3c343b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd12500a556245649485ffa25f9896cc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:12:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3197790770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.1 MiB/s wr, 149 op/s
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.895 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.897 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.898 255071 INFO nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Creating image(s)
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.923 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.945 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.976 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:17 compute-0 nova_compute[255040]: 2025-11-29 08:12:17.982 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3197790770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.017 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Successfully created port: 788543c5-e772-41b1-a887-4ced66fc5497 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.059 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.060 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.062 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.062 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.089 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.095 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.462 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.537 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] resizing rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:12:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.668 255071 DEBUG nova.objects.instance [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lazy-loading 'migration_context' on Instance uuid 66776362-7d85-47fc-a7d5-f2c50e77d9da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.687 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.687 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Ensure instance console log exists: /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.688 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.688 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.689 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.774 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Successfully updated port: 788543c5-e772-41b1-a887-4ced66fc5497 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.788 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.789 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquired lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.789 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.886 255071 DEBUG nova.compute.manager [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-changed-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.887 255071 DEBUG nova.compute.manager [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Refreshing instance network info cache due to event network-changed-788543c5-e772-41b1-a887-4ced66fc5497. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:18 compute-0 nova_compute[255040]: 2025-11-29 08:12:18.887 255071 DEBUG oslo_concurrency.lockutils [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.006 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:12:19 compute-0 ceph-mon[75237]: pgmap v1687: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.1 MiB/s wr, 149 op/s
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.813 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 118 op/s
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.972 255071 DEBUG nova.network.neutron [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updating instance_info_cache with network_info: [{"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.994 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Releasing lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.995 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Instance network_info: |[{"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.995 255071 DEBUG oslo_concurrency.lockutils [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:19 compute-0 nova_compute[255040]: 2025-11-29 08:12:19.996 255071 DEBUG nova.network.neutron [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Refreshing network info cache for port 788543c5-e772-41b1-a887-4ced66fc5497 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.000 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Start _get_guest_xml network_info=[{"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e659bda1-3f72-42ba-8ec7-e7958af94e60', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e659bda1-3f72-42ba-8ec7-e7958af94e60', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '66776362-7d85-47fc-a7d5-f2c50e77d9da', 'attached_at': '', 'detached_at': '', 'volume_id': 'e659bda1-3f72-42ba-8ec7-e7958af94e60', 'serial': 'e659bda1-3f72-42ba-8ec7-e7958af94e60'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': -1, 'delete_on_termination': False, 'attachment_id': '7d22dc09-53fb-4a71-94c0-4f86ae4f575f', 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.009 255071 WARNING nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.018 255071 DEBUG nova.virt.libvirt.host [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.019 255071 DEBUG nova.virt.libvirt.host [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.025 255071 DEBUG nova.virt.libvirt.host [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.026 255071 DEBUG nova.virt.libvirt.host [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.026 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.027 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.027 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.027 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.028 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.028 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.028 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.029 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.029 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.029 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.030 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.030 255071 DEBUG nova.virt.hardware [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.034 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/396524459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.591 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.619 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:20 compute-0 nova_compute[255040]: 2025-11-29 08:12:20.627 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:21 compute-0 ceph-mon[75237]: pgmap v1688: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 118 op/s
Nov 29 08:12:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/396524459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816270012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.201 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.224 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.253 255071 DEBUG nova.virt.libvirt.vif [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-363213132',display_name='tempest-instance-363213132',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-363213132',id=20,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKLQBtih3QY3upK7E/mGXww52oaLXJUDqVdvSGRFeD61UVQv9k745MBcEt+NyoEeeC/RacC2dH1tl9acg0h5vr7l5lgiy/wKy827YiGaE08rwTfDwDvFUvadQsHi/BzD6w==',key_name='tempest-keypair-919818827',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd12500a556245649485ffa25f9896cc',ramdisk_id='',reservation_id='r-mltrx3sa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1194012855',owner_user_name='tempest-VolumesBackupsTest-1194012855-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ac027dfac1940a585665db58d3c343b',uuid=66776362-7d85-47fc-a7d5-f2c50e77d9da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.254 255071 DEBUG nova.network.os_vif_util [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converting VIF {"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.255 255071 DEBUG nova.network.os_vif_util [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.257 255071 DEBUG nova.objects.instance [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 66776362-7d85-47fc-a7d5-f2c50e77d9da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.272 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <uuid>66776362-7d85-47fc-a7d5-f2c50e77d9da</uuid>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <name>instance-00000014</name>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:name>tempest-instance-363213132</nova:name>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:12:20</nova:creationTime>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:user uuid="3ac027dfac1940a585665db58d3c343b">tempest-VolumesBackupsTest-1194012855-project-member</nova:user>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:project uuid="dd12500a556245649485ffa25f9896cc">tempest-VolumesBackupsTest-1194012855</nova:project>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <nova:port uuid="788543c5-e772-41b1-a887-4ced66fc5497">
Nov 29 08:12:21 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <system>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="serial">66776362-7d85-47fc-a7d5-f2c50e77d9da</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="uuid">66776362-7d85-47fc-a7d5-f2c50e77d9da</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </system>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <os>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </os>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <features>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </features>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/66776362-7d85-47fc-a7d5-f2c50e77d9da_disk">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-e659bda1-3f72-42ba-8ec7-e7958af94e60">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:21 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <serial>e659bda1-3f72-42ba-8ec7-e7958af94e60</serial>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:97:f4:ae"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <target dev="tap788543c5-e7"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/console.log" append="off"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <video>
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </video>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:12:21 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:12:21 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:12:21 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:12:21 compute-0 nova_compute[255040]: </domain>
Nov 29 08:12:21 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.282 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Preparing to wait for external event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.282 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.283 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.283 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.284 255071 DEBUG nova.virt.libvirt.vif [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-363213132',display_name='tempest-instance-363213132',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-363213132',id=20,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKLQBtih3QY3upK7E/mGXww52oaLXJUDqVdvSGRFeD61UVQv9k745MBcEt+NyoEeeC/RacC2dH1tl9acg0h5vr7l5lgiy/wKy827YiGaE08rwTfDwDvFUvadQsHi/BzD6w==',key_name='tempest-keypair-919818827',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd12500a556245649485ffa25f9896cc',ramdisk_id='',reservation_id='r-mltrx3sa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1194012855',owner_user_name='tempest-VolumesBackupsTest-1194012855-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ac027dfac1940a585665db58d3c343b',uuid=66776362-7d85-47fc-a7d5-f2c50e77d9da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": 
"normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.285 255071 DEBUG nova.network.os_vif_util [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converting VIF {"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.286 255071 DEBUG nova.network.os_vif_util [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.287 255071 DEBUG os_vif [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.288 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.288 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.289 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.293 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.294 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788543c5-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.294 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap788543c5-e7, col_values=(('external_ids', {'iface-id': '788543c5-e772-41b1-a887-4ced66fc5497', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:f4:ae', 'vm-uuid': '66776362-7d85-47fc-a7d5-f2c50e77d9da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:21 compute-0 NetworkManager[49116]: <info>  [1764403941.2985] manager: (tap788543c5-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.300 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.303 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.309 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.311 255071 INFO os_vif [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7')
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.385 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.398 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.398 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.399 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] No VIF found with MAC fa:16:3e:97:f4:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.399 255071 INFO nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Using config drive
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.431 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.801 255071 INFO nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Creating config drive at /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.811 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis8wg5u4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.957 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpis8wg5u4" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.989 255071 DEBUG nova.storage.rbd_utils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] rbd image 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:21 compute-0 nova_compute[255040]: 2025-11-29 08:12:21.993 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2816270012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.178 255071 DEBUG oslo_concurrency.processutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config 66776362-7d85-47fc-a7d5-f2c50e77d9da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.179 255071 INFO nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Deleting local config drive /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da/disk.config because it was imported into RBD.
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.196 255071 DEBUG nova.network.neutron [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updated VIF entry in instance network info cache for port 788543c5-e772-41b1-a887-4ced66fc5497. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.197 255071 DEBUG nova.network.neutron [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updating instance_info_cache with network_info: [{"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.213 255071 DEBUG oslo_concurrency.lockutils [req-fff29000-cec9-4657-864e-589e99f4db74 req-7da1b644-5951-435a-9d0d-12bdb1e1863e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:22 compute-0 kernel: tap788543c5-e7: entered promiscuous mode
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.2332] manager: (tap788543c5-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.238 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00184|binding|INFO|Claiming lport 788543c5-e772-41b1-a887-4ced66fc5497 for this chassis.
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00185|binding|INFO|788543c5-e772-41b1-a887-4ced66fc5497: Claiming fa:16:3e:97:f4:ae 10.100.0.6
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.249 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:f4:ae 10.100.0.6'], port_security=['fa:16:3e:97:f4:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '66776362-7d85-47fc-a7d5-f2c50e77d9da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86b65b38-9c10-44ce-abcd-c34ac448faec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd12500a556245649485ffa25f9896cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '413160d7-759e-4024-b663-8512ced0a321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4d91e7-0e7f-4535-8521-212392cb3f4e, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=788543c5-e772-41b1-a887-4ced66fc5497) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.251 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 788543c5-e772-41b1-a887-4ced66fc5497 in datapath 86b65b38-9c10-44ce-abcd-c34ac448faec bound to our chassis
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.253 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86b65b38-9c10-44ce-abcd-c34ac448faec
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.269 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.271 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00186|binding|INFO|Setting lport 788543c5-e772-41b1-a887-4ced66fc5497 ovn-installed in OVS
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00187|binding|INFO|Setting lport 788543c5-e772-41b1-a887-4ced66fc5497 up in Southbound
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.271 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf861bf-e2de-419f-a34e-fa880c9a1818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.273 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap86b65b38-91 in ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.275 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap86b65b38-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.276 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[343b99f9-201f-490b-af64-9aab43f2d066]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.280 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa7d30f-5b1f-4ad0-9ae9-e246ddb0df4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 systemd-udevd[287537]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:b2:a7 10.100.0.6
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:b2:a7 10.100.0.6
Nov 29 08:12:22 compute-0 systemd-machined[216271]: New machine qemu-20-instance-00000014.
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.301 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9d343d-0de0-4e70-9cf0-116977a0fb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.3032] device (tap788543c5-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.3048] device (tap788543c5-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:12:22 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.334 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5d598b08-3ac6-48a5-9abe-4ba1459dc676]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.372 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fcbdd-3687-48f8-b72c-61ad2fdb45cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.3810] manager: (tap86b65b38-90): new Veth device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.379 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf657b8-0764-441f-963f-387c63d439ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.421 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[3956aa63-7ee6-45dc-abe2-c719063354e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.425 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b5a89d-6b78-41df-b3d8-1adb02f063f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.4615] device (tap86b65b38-90): carrier: link connected
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.468 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6be371d5-b5e7-49cc-8108-0c62c0203dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.497 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[96ff69f8-32c1-4e5d-bc65-e19d43fdd823]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86b65b38-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:f0:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617980, 'reachable_time': 18883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287569, 'error': None, 'target': 'ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.523 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[97e1b086-6a47-439a-8bf5-de5e3ea76946]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:f01f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617980, 'tstamp': 617980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287570, 'error': None, 'target': 'ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.544 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6555697e-0523-4308-ad89-4c4c4cdd3e61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86b65b38-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:f0:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617980, 'reachable_time': 18883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287571, 'error': None, 'target': 'ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.587 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9a513f-bd88-4bb1-ade3-3ba26a3fe60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.598 255071 DEBUG nova.compute.manager [req-70b0adaa-79b2-454c-b58b-e827511cbd31 req-df02d291-0a58-4536-90dc-fa56742afdee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.598 255071 DEBUG oslo_concurrency.lockutils [req-70b0adaa-79b2-454c-b58b-e827511cbd31 req-df02d291-0a58-4536-90dc-fa56742afdee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.598 255071 DEBUG oslo_concurrency.lockutils [req-70b0adaa-79b2-454c-b58b-e827511cbd31 req-df02d291-0a58-4536-90dc-fa56742afdee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.599 255071 DEBUG oslo_concurrency.lockutils [req-70b0adaa-79b2-454c-b58b-e827511cbd31 req-df02d291-0a58-4536-90dc-fa56742afdee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.599 255071 DEBUG nova.compute.manager [req-70b0adaa-79b2-454c-b58b-e827511cbd31 req-df02d291-0a58-4536-90dc-fa56742afdee cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Processing event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.649 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8d90d0-8c02-4f81-ae8b-bc8bad0892bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.652 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86b65b38-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.652 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.652 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86b65b38-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:22 compute-0 NetworkManager[49116]: <info>  [1764403942.6559] manager: (tap86b65b38-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 08:12:22 compute-0 kernel: tap86b65b38-90: entered promiscuous mode
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.656 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.658 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86b65b38-90, col_values=(('external_ids', {'iface-id': '1abf2589-793c-4d7c-a3ce-8a2edebe30c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:22 compute-0 ovn_controller[153295]: 2025-11-29T08:12:22Z|00188|binding|INFO|Releasing lport 1abf2589-793c-4d7c-a3ce-8a2edebe30c6 from this chassis (sb_readonly=0)
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.659 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.661 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/86b65b38-9c10-44ce-abcd-c34ac448faec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/86b65b38-9c10-44ce-abcd-c34ac448faec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.662 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[46e9d4ee-ce0e-49b2-98a1-7eead6b3ab00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.663 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-86b65b38-9c10-44ce-abcd-c34ac448faec
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/86b65b38-9c10-44ce-abcd-c34ac448faec.pid.haproxy
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 86b65b38-9c10-44ce-abcd-c34ac448faec
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:12:22 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:22.666 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec', 'env', 'PROCESS_TAG=haproxy-86b65b38-9c10-44ce-abcd-c34ac448faec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/86b65b38-9c10-44ce-abcd-c34ac448faec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:12:22 compute-0 nova_compute[255040]: 2025-11-29 08:12:22.678 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:23 compute-0 ceph-mon[75237]: pgmap v1689: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 29 08:12:23 compute-0 podman[287658]: 2025-11-29 08:12:23.158108469 +0000 UTC m=+0.058403960 container create 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:12:23 compute-0 systemd[1]: Started libpod-conmon-77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722.scope.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.214 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.216 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403943.214696, 66776362-7d85-47fc-a7d5-f2c50e77d9da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.216 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] VM Started (Lifecycle Event)
Nov 29 08:12:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.221 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:12:23 compute-0 podman[287658]: 2025-11-29 08:12:23.127632996 +0000 UTC m=+0.027928507 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.226 255071 INFO nova.virt.libvirt.driver [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Instance spawned successfully.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.227 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07415294d5eb91a7f1e753ddfa964a0d4ea99357e0973fae64ba0cdbb5099f35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 podman[287658]: 2025-11-29 08:12:23.24759736 +0000 UTC m=+0.147892871 container init 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.248 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:23 compute-0 podman[287658]: 2025-11-29 08:12:23.256269045 +0000 UTC m=+0.156564546 container start 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.258 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.262 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.263 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.263 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.264 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.264 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.265 255071 DEBUG nova.virt.libvirt.driver [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
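
The run of "Found default for hw_*" records above notes, per instance, which hardware-bus choices came from driver defaults rather than from image properties. A minimal sketch of that fallback pattern (DRIVER_DEFAULTS mirrors the values logged above; resolve_hw_prop is a hypothetical helper, not Nova's internals):

    # Illustrative only: resolving per-instance hardware defaults when
    # the image metadata leaves them unset.
    DRIVER_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def resolve_hw_prop(image_props: dict, key: str) -> tuple[str, bool]:
        """Return (value, came_from_default)."""
        if key in image_props:
            return image_props[key], False
        return DRIVER_DEFAULTS[key], True

    props = {"hw_disk_bus": "scsi"}  # set explicitly on the image
    for key in DRIVER_DEFAULTS:
        value, defaulted = resolve_hw_prop(props, key)
        if defaulted:
            print(f"Found default for {key} of {value}")
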
Nov 29 08:12:23 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [NOTICE]   (287683) : New worker (287685) forked
Nov 29 08:12:23 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [NOTICE]   (287683) : Loading success.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.296 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.297 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403943.2155035, 66776362-7d85-47fc-a7d5-f2c50e77d9da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.297 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] VM Paused (Lifecycle Event)
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.323 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.326 255071 INFO nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Took 5.43 seconds to spawn the instance on the hypervisor.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.326 255071 DEBUG nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.334 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403943.2201967, 66776362-7d85-47fc-a7d5-f2c50e77d9da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.334 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] VM Resumed (Lifecycle Event)
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.369 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.376 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.402 255071 INFO nova.compute.manager [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Took 7.61 seconds to build instance.
Nov 29 08:12:23 compute-0 nova_compute[255040]: 2025-11-29 08:12:23.415 255071 DEBUG oslo_concurrency.lockutils [None req-44003ef5-552a-49a6-9efc-94554bb90f09 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
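
The Paused/Resumed lifecycle events above race with a spawn that is still in progress, which is why sync_power_state logs "pending task (spawning). Skip." rather than rewriting the DB power state. A sketch of that guard, with hypothetical names standing in for the manager's internals:

    # Illustrative sketch of the power-state sync guard seen above: an
    # instance mid-task (e.g. spawning) is skipped rather than synced.
    def sync_power_state(instance: dict, vm_power_state: int) -> None:
        if instance["task_state"] is not None:
            print(f"During sync_power_state the instance has a pending "
                  f"task ({instance['task_state']}). Skip.")
            return
        if instance["power_state"] != vm_power_state:
            instance["power_state"] = vm_power_state  # persisted to DB in real code

    sync_power_state({"task_state": "spawning", "power_state": 0}, 1)
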
Nov 29 08:12:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 402 KiB/s rd, 3.4 MiB/s wr, 71 op/s
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.659 255071 DEBUG nova.compute.manager [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.659 255071 DEBUG oslo_concurrency.lockutils [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.660 255071 DEBUG oslo_concurrency.lockutils [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.660 255071 DEBUG oslo_concurrency.lockutils [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.661 255071 DEBUG nova.compute.manager [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] No waiting events found dispatching network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.661 255071 WARNING nova.compute.manager [req-919c8b12-0a9e-4347-b613-407954249878 req-b561ac4f-a128-4d02-bb66-134fb21525f4 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received unexpected event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 for instance with vm_state active and task_state None.
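
The WARNING above fires because network-vif-plugged-788543c5 arrived after the spawn had already completed and dropped its waiter (vm_state is active, task_state None). The pop-event mechanism is essentially a per-instance table of pending waiters; an event with no registered waiter is logged and discarded. A self-contained sketch of that pattern (names hypothetical):

    import threading

    # Hypothetical sketch of the prepare/pop external-event pattern: a
    # waiter is registered before an operation, and the Neutron event
    # either completes it or, if nobody is waiting, triggers a warning.
    _events: dict[tuple[str, str], threading.Event] = {}
    _lock = threading.Lock()

    def prepare_event(instance_uuid: str, name: str) -> threading.Event:
        ev = threading.Event()
        with _lock:
            _events[(instance_uuid, name)] = ev
        return ev

    def pop_event(instance_uuid: str, name: str) -> None:
        with _lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev is None:
            print(f"Received unexpected event {name} for instance "
                  f"{instance_uuid} with no waiter")
        else:
            ev.set()

    pop_event("66776362-7d85-47fc-a7d5-f2c50e77d9da",
              "network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497")
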
Nov 29 08:12:24 compute-0 nova_compute[255040]: 2025-11-29 08:12:24.815 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:25 compute-0 ceph-mon[75237]: pgmap v1690: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 402 KiB/s rd, 3.4 MiB/s wr, 71 op/s
Nov 29 08:12:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.054 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:26.053 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:26.056 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:12:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573104119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/573104119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
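
The audited mon commands here ({"prefix":"df"} and {"prefix":"osd pool get-quota"}) are capacity polls issued as client.openstack. The same command can be sent from Python through the rados binding; a sketch, assuming /etc/ceph/ceph.conf and a client.openstack keyring are readable on this host:

    import json
    import rados

    # Sketch: issue the same mon command the audit log shows above,
    # via the python-rados binding.
    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        stats = json.loads(out)["stats"]
        print(ret, stats["total_avail_bytes"])
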
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.297 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.747 255071 DEBUG nova.compute.manager [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-changed-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.748 255071 DEBUG nova.compute.manager [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Refreshing instance network info cache due to event network-changed-788543c5-e772-41b1-a887-4ced66fc5497. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.748 255071 DEBUG oslo_concurrency.lockutils [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.748 255071 DEBUG oslo_concurrency.lockutils [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:26 compute-0 nova_compute[255040]: 2025-11-29 08:12:26.749 255071 DEBUG nova.network.neutron [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Refreshing network info cache for port 788543c5-e772-41b1-a887-4ced66fc5497 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:27 compute-0 ceph-mon[75237]: pgmap v1691: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 29 08:12:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/573104119' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/573104119' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:27.134 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:27.135 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:27.136 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
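
The acquired/released pairs above, with their "waited"/"held" durations, come from lockutils' logging wrapper; the underlying pattern is just a lock bracketed by two timers. A minimal re-implementation sketch (not oslo's actual code):

    import threading
    import time
    from contextlib import contextmanager

    # Sketch of the waited/held timing that the lock log lines report.
    @contextmanager
    def timed_lock(lock: threading.Lock, name: str, by: str):
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired by "{by}" :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" "released" by "{by}" :: held {held:.3f}s')

    lk = threading.Lock()
    with timed_lock(lk, "_check_child_processes",
                    "ProcessMonitor._check_child_processes"):
        pass  # critical section
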
Nov 29 08:12:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 29 08:12:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577956150' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577956150' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:28 compute-0 nova_compute[255040]: 2025-11-29 08:12:28.393 255071 DEBUG nova.network.neutron [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updated VIF entry in instance network info cache for port 788543c5-e772-41b1-a887-4ced66fc5497. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:12:28 compute-0 nova_compute[255040]: 2025-11-29 08:12:28.394 255071 DEBUG nova.network.neutron [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updating instance_info_cache with network_info: [{"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:28 compute-0 nova_compute[255040]: 2025-11-29 08:12:28.435 255071 DEBUG oslo_concurrency.lockutils [req-119215d7-fc2e-42ab-b683-d3776a86cde6 req-d39c2c61-7af6-45d6-8094-8ebc815ea774 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-66776362-7d85-47fc-a7d5-f2c50e77d9da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
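
The network_info blob cached at 08:12:28 is plain JSON-like data, so the fixed and floating addresses can be pulled out of it directly. A sketch over a trimmed copy of the structure logged above:

    # Sketch: extracting addresses from the network_info structure
    # cached above (trimmed to the fields this example touches).
    network_info = [{
        "id": "788543c5-e772-41b1-a887-4ced66fc5497",
        "address": "fa:16:3e:97:f4:ae",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.6", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.246",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
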
Nov 29 08:12:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:28 compute-0 podman[287694]: 2025-11-29 08:12:28.959222528 +0000 UTC m=+0.120088809 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
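
The health_status=healthy event above is emitted by podman's healthcheck timer running the configured /openstack/healthcheck test. The current status can also be read back with podman inspect; a sketch (the State key is "Health" on recent podman and "Healthcheck" on older releases, so both are tried):

    import json
    import subprocess

    # Sketch: read a container's last healthcheck status, as reported
    # by the health_status events above.
    def health_status(name: str) -> str:
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True,
                             check=True).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ovn_controller"))
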
Nov 29 08:12:29 compute-0 ceph-mon[75237]: pgmap v1692: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 29 08:12:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1577956150' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1577956150' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:29 compute-0 nova_compute[255040]: 2025-11-29 08:12:29.818 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 08:12:29 compute-0 nova_compute[255040]: 2025-11-29 08:12:29.990 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:29 compute-0 nova_compute[255040]: 2025-11-29 08:12:29.990 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:30 compute-0 nova_compute[255040]: 2025-11-29 08:12:30.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:31 compute-0 ceph-mon[75237]: pgmap v1693: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 08:12:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/326072825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/326072825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:31 compute-0 nova_compute[255040]: 2025-11-29 08:12:31.302 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 187 op/s
Nov 29 08:12:31 compute-0 nova_compute[255040]: 2025-11-29 08:12:31.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:31 compute-0 nova_compute[255040]: 2025-11-29 08:12:31.978 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:12:31 compute-0 nova_compute[255040]: 2025-11-29 08:12:31.999 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:12:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:32.059 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
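
Taken together, the SbGlobalUpdateEvent at 08:12:26 and the DbSetCommand above show the agent's acknowledgement loop: it sees SB_Global.nb_cfg move to 17, waits a randomized delay ("Delaying updating chassis table for 6 seconds"), then writes neutron:ovn-metadata-sb-cfg=17 into its Chassis_Private external_ids so the control plane can tell the agent has caught up. A schematic sketch, with plain dicts standing in for the OVSDB rows:

    import time

    # Schematic sketch of the nb_cfg acknowledgement visible above;
    # plain dicts stand in for the SB_Global and Chassis_Private rows.
    sb_global = {"nb_cfg": 17}
    chassis_private = {"external_ids": {"neutron:ovn-metadata-sb-cfg": "16"}}

    def on_sb_global_update(delay_s: float = 6.0) -> None:
        time.sleep(delay_s)  # spread acks so agents don't stampede the SB DB
        chassis_private["external_ids"]["neutron:ovn-metadata-sb-cfg"] = \
            str(sb_global["nb_cfg"])

    on_sb_global_update(delay_s=0.0)  # no delay for the demo
    print(chassis_private["external_ids"])
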
Nov 29 08:12:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/326072825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/326072825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:33 compute-0 ceph-mon[75237]: pgmap v1694: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 187 op/s
Nov 29 08:12:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4245527503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4245527503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Nov 29 08:12:33 compute-0 nova_compute[255040]: 2025-11-29 08:12:33.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:33 compute-0 nova_compute[255040]: 2025-11-29 08:12:33.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.013 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.013 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.014 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.014 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.015 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4245527503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4245527503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198246112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.511 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
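
The resource audit above shells out to ceph df --format=json (0.496s here) to size the RBD-backed disk pool. Reducing that output to the free-GiB figure the tracker reports, assuming the ceph CLI and the openstack client keyring are available on this host:

    import json
    import subprocess

    # Sketch: the same "ceph df" call the resource tracker runs above,
    # reduced to total/avail GiB.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB")
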
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.612 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.613 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.613 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.619 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.620 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.797 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.799 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4147MB free_disk=59.96721267700195GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.800 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.800 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.823 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.883 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 76ba0630-af87-46e8-83ee-b983d76f480d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.884 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 66776362-7d85-47fc-a7d5-f2c50e77d9da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.884 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.884 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:12:34 compute-0 nova_compute[255040]: 2025-11-29 08:12:34.939 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:35 compute-0 ceph-mon[75237]: pgmap v1695: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Nov 29 08:12:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4198246112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125065092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:35 compute-0 nova_compute[255040]: 2025-11-29 08:12:35.408 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:35 compute-0 nova_compute[255040]: 2025-11-29 08:12:35.416 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:35 compute-0 nova_compute[255040]: 2025-11-29 08:12:35.434 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
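
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio; with the values logged above that is 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. Worked in a few lines:

    # Sketch: effective capacity from the inventory logged above,
    # computed as (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
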
Nov 29 08:12:35 compute-0 nova_compute[255040]: 2025-11-29 08:12:35.460 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:12:35 compute-0 nova_compute[255040]: 2025-11-29 08:12:35.461 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 866 KiB/s wr, 165 op/s
Nov 29 08:12:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2125065092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:36 compute-0 nova_compute[255040]: 2025-11-29 08:12:36.360 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:36 compute-0 nova_compute[255040]: 2025-11-29 08:12:36.461 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:36 compute-0 nova_compute[255040]: 2025-11-29 08:12:36.463 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:12:36 compute-0 podman[287763]: 2025-11-29 08:12:36.950075576 +0000 UTC m=+0.106063270 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:12:37 compute-0 ceph-mon[75237]: pgmap v1696: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 866 KiB/s wr, 165 op/s
Nov 29 08:12:37 compute-0 sudo[287782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:37 compute-0 sudo[287782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:37 compute-0 sudo[287782]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:37 compute-0 sudo[287807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:37 compute-0 sudo[287807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:37 compute-0 sudo[287807]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:37 compute-0 sudo[287832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:37 compute-0 sudo[287832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:37 compute-0 sudo[287832]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.2 MiB/s rd, 17 KiB/s wr, 85 op/s
Nov 29 08:12:37 compute-0 sudo[287857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:12:37 compute-0 sudo[287857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:37 compute-0 nova_compute[255040]: 2025-11-29 08:12:37.971 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:37 compute-0 nova_compute[255040]: 2025-11-29 08:12:37.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:38 compute-0 ceph-mon[75237]: pgmap v1697: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.2 MiB/s rd, 17 KiB/s wr, 85 op/s
Nov 29 08:12:38 compute-0 sudo[287857]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a37ee514-da18-4de6-93a0-0d92bb166fbe does not exist
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6dd8d6ee-6e6c-4ddb-9435-bd879fc5c7b6 does not exist
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d9724cd3-bd69-48fe-a181-90a74877b11a does not exist
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:12:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:38 compute-0 sudo[287912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:38 compute-0 sudo[287912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:38 compute-0 sudo[287912]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:38 compute-0 sudo[287937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:38 compute-0 sudo[287937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:38 compute-0 sudo[287937]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:38 compute-0 sudo[287962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:38 compute-0 sudo[287962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:38 compute-0 sudo[287962]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:38 compute-0 sudo[287987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:12:38 compute-0 sudo[287987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:12:38
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', 'images', '.rgw.root']
Nov 29 08:12:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:12:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.284856008 +0000 UTC m=+0.052329825 container create 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:39 compute-0 systemd[1]: Started libpod-conmon-65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5.scope.
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.263152472 +0000 UTC m=+0.030626319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.385692416 +0000 UTC m=+0.153166253 container init 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.395504601 +0000 UTC m=+0.162978418 container start 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.400493056 +0000 UTC m=+0.167966883 container attach 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:12:39 compute-0 romantic_poitras[288066]: 167 167
Nov 29 08:12:39 compute-0 systemd[1]: libpod-65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5.scope: Deactivated successfully.
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.405218874 +0000 UTC m=+0.172692711 container died 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-933f6a4ee30c8eb2fb34d45bd158f938d1bdc70dd300dd34c573d1b3f3e5df16-merged.mount: Deactivated successfully.
Nov 29 08:12:39 compute-0 podman[288050]: 2025-11-29 08:12:39.455929225 +0000 UTC m=+0.223403032 container remove 65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:12:39 compute-0 systemd[1]: libpod-conmon-65e3b1a207dfedebd156af518bac1fd457ca886356bbe67ff444bca346d716b5.scope: Deactivated successfully.
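romantic_poitras above is one of the short-lived helper containers cephadm launches around each ceph-volume call: created, started, attached, dead and removed within a fraction of a second, printing only "167 167". That output is consistent with a probe of the ceph uid/gid inside the image (167 is the ceph user in these images). A minimal sketch of the same one-shot pattern with podman, assuming the image digest from the log and treating the stat probe as an assumption about what cephadm actually runs:

#!/usr/bin/env python3
# Minimal sketch: launch a throwaway container from the Ceph image logged
# above and stat /var/lib/ceph inside it. Treating the probe as stat(1) is
# an assumption, not something the log states.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# --rm removes the container as soon as it exits, which matches the
# create/start/died/remove sequence journald records within the same second.
result = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # expected to read "167 167" as in the log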
Nov 29 08:12:39 compute-0 podman[288089]: 2025-11-29 08:12:39.64801253 +0000 UTC m=+0.049794468 container create 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:39 compute-0 systemd[1]: Started libpod-conmon-90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512.scope.
Nov 29 08:12:39 compute-0 podman[288089]: 2025-11-29 08:12:39.625993715 +0000 UTC m=+0.027775673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:39 compute-0 podman[288089]: 2025-11-29 08:12:39.742509046 +0000 UTC m=+0.144290994 container init 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:12:39 compute-0 podman[288089]: 2025-11-29 08:12:39.754427578 +0000 UTC m=+0.156209536 container start 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:39 compute-0 podman[288089]: 2025-11-29 08:12:39.758817147 +0000 UTC m=+0.160599085 container attach 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:12:39 compute-0 nova_compute[255040]: 2025-11-29 08:12:39.825 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.3 MiB/s rd, 388 KiB/s wr, 105 op/s
Nov 29 08:12:39 compute-0 ovn_controller[153295]: 2025-11-29T08:12:39Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:f4:ae 10.100.0.6
Nov 29 08:12:39 compute-0 ovn_controller[153295]: 2025-11-29T08:12:39Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:f4:ae 10.100.0.6
Nov 29 08:12:40 compute-0 ceph-mon[75237]: pgmap v1698: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.3 MiB/s rd, 388 KiB/s wr, 105 op/s
Nov 29 08:12:40 compute-0 distracted_cohen[288105]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:12:40 compute-0 distracted_cohen[288105]: --> relative data size: 1.0
Nov 29 08:12:40 compute-0 distracted_cohen[288105]: --> All data devices are unavailable
Nov 29 08:12:41 compute-0 systemd[1]: libpod-90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512.scope: Deactivated successfully.
Nov 29 08:12:41 compute-0 systemd[1]: libpod-90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512.scope: Consumed 1.174s CPU time.
Nov 29 08:12:41 compute-0 podman[288089]: 2025-11-29 08:12:41.015703139 +0000 UTC m=+1.417485097 container died 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-755d4fc63838f3fd003a9edc397a5a46e075515cf2f1a5053e76cf566bea65af-merged.mount: Deactivated successfully.
Nov 29 08:12:41 compute-0 podman[288089]: 2025-11-29 08:12:41.081355244 +0000 UTC m=+1.483137182 container remove 90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:12:41 compute-0 systemd[1]: libpod-conmon-90d609883e1fe031bf8a6ffb29f4967cb1ed8edc21fa3f332494ee3404409512.scope: Deactivated successfully.
Nov 29 08:12:41 compute-0 sudo[287987]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:41 compute-0 sudo[288147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:41 compute-0 sudo[288147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:41 compute-0 sudo[288147]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:41 compute-0 sudo[288172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:41 compute-0 sudo[288172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:41 compute-0 sudo[288172]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:41 compute-0 sudo[288197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:41 compute-0 sudo[288197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:41 compute-0 sudo[288197]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:41 compute-0 nova_compute[255040]: 2025-11-29 08:12:41.364 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:41 compute-0 sudo[288222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:12:41 compute-0 sudo[288222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
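The sudo COMMAND= line above shows cephadm wrapping ceph-volume lvm list --format json for fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e in another one-shot container. A minimal sketch of driving the same listing from Python, assuming root on compute-0, the copied cephadm script and image digest exactly as logged, and that cephadm relays the container's JSON on stdout:

#!/usr/bin/env python3
# Minimal sketch: re-run the ceph-volume listing from the sudo line above and
# decode the JSON it prints. Paths, fsid and image digest are copied from the
# logged command; everything else here is an assumption for illustration.
import json
import subprocess

FSID = "321e9cb7-01a2-5759-bf8c-981c9a64aa3e"
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

cmd = ["/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
       "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"]
proc = subprocess.run(cmd, capture_output=True, text=True, check=True)

listing = json.loads(proc.stdout)   # keys are OSD ids: "0", "1", "2" below
print(sorted(listing, key=int))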
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.758485557 +0000 UTC m=+0.046245192 container create 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:12:41 compute-0 systemd[1]: Started libpod-conmon-177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503.scope.
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.740243984 +0000 UTC m=+0.028003639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.862283664 +0000 UTC m=+0.150043319 container init 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:12:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 340 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.870364803 +0000 UTC m=+0.158124438 container start 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.874507525 +0000 UTC m=+0.162267160 container attach 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 08:12:41 compute-0 nifty_saha[288303]: 167 167
Nov 29 08:12:41 compute-0 systemd[1]: libpod-177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503.scope: Deactivated successfully.
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.88026644 +0000 UTC m=+0.168026115 container died 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a129fd346dd02a5d3f4caf78be50ae4b03022a5dee65236b8af3fdc28e86219-merged.mount: Deactivated successfully.
Nov 29 08:12:41 compute-0 podman[288287]: 2025-11-29 08:12:41.932426191 +0000 UTC m=+0.220185826 container remove 177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:41 compute-0 systemd[1]: libpod-conmon-177274afef76bea61b8fd9c0f615a9431fe1713d6a9ae1d492e3ced90d48b503.scope: Deactivated successfully.
Nov 29 08:12:42 compute-0 podman[288325]: 2025-11-29 08:12:42.139896231 +0000 UTC m=+0.057785053 container create 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:12:42 compute-0 nova_compute[255040]: 2025-11-29 08:12:42.141 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:42 compute-0 systemd[1]: Started libpod-conmon-6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660.scope.
Nov 29 08:12:42 compute-0 podman[288325]: 2025-11-29 08:12:42.115929903 +0000 UTC m=+0.033818515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f9acbc2159c53191e800f94ea8aa708d6f4fcdd0128b83724c23b8d4a4bc26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f9acbc2159c53191e800f94ea8aa708d6f4fcdd0128b83724c23b8d4a4bc26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f9acbc2159c53191e800f94ea8aa708d6f4fcdd0128b83724c23b8d4a4bc26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f9acbc2159c53191e800f94ea8aa708d6f4fcdd0128b83724c23b8d4a4bc26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:42 compute-0 podman[288325]: 2025-11-29 08:12:42.231797477 +0000 UTC m=+0.149686079 container init 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 08:12:42 compute-0 podman[288325]: 2025-11-29 08:12:42.240492412 +0000 UTC m=+0.158380994 container start 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:42 compute-0 podman[288325]: 2025-11-29 08:12:42.244566433 +0000 UTC m=+0.162455015 container attach 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:12:42 compute-0 ceph-mon[75237]: pgmap v1699: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 340 KiB/s rd, 1.7 MiB/s wr, 103 op/s
Nov 29 08:12:43 compute-0 busy_knuth[288341]: {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     "0": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "devices": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "/dev/loop3"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             ],
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_name": "ceph_lv0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_size": "21470642176",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "name": "ceph_lv0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "tags": {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.crush_device_class": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.encrypted": "0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_id": "0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.vdo": "0"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             },
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "vg_name": "ceph_vg0"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         }
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     ],
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     "1": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "devices": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "/dev/loop4"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             ],
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_name": "ceph_lv1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_size": "21470642176",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "name": "ceph_lv1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "tags": {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.crush_device_class": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.encrypted": "0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_id": "1",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.vdo": "0"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             },
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "vg_name": "ceph_vg1"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         }
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     ],
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     "2": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "devices": [
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "/dev/loop5"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             ],
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_name": "ceph_lv2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_size": "21470642176",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "name": "ceph_lv2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "tags": {
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.crush_device_class": "",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.encrypted": "0",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osd_id": "2",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:                 "ceph.vdo": "0"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             },
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "type": "block",
Nov 29 08:12:43 compute-0 busy_knuth[288341]:             "vg_name": "ceph_vg2"
Nov 29 08:12:43 compute-0 busy_knuth[288341]:         }
Nov 29 08:12:43 compute-0 busy_knuth[288341]:     ]
Nov 29 08:12:43 compute-0 busy_knuth[288341]: }
Nov 29 08:12:43 compute-0 systemd[1]: libpod-6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660.scope: Deactivated successfully.
Nov 29 08:12:43 compute-0 podman[288325]: 2025-11-29 08:12:43.084221431 +0000 UTC m=+1.002110013 container died 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8f9acbc2159c53191e800f94ea8aa708d6f4fcdd0128b83724c23b8d4a4bc26-merged.mount: Deactivated successfully.
Nov 29 08:12:43 compute-0 podman[288325]: 2025-11-29 08:12:43.143075122 +0000 UTC m=+1.060963704 container remove 6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 08:12:43 compute-0 systemd[1]: libpod-conmon-6d0b400f59cf6ab0c4d2dcbe169c7f02ae8ca0dd443d2083ad7690471a420660.scope: Deactivated successfully.
Nov 29 08:12:43 compute-0 sudo[288222]: pam_unix(sudo:session): session closed for user root
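The JSON emitted by busy_knuth maps OSD ids 0, 1 and 2 to ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1 and ceph_vg2/ceph_lv2 on /dev/loop3, /dev/loop4 and /dev/loop5. A minimal sketch of flattening that structure into a per-OSD summary, assuming the JSON has been saved to lvm_list.json (a hypothetical file name; the log only shows the output inline):

#!/usr/bin/env python3
# Minimal sketch: summarise `ceph-volume lvm list --format json` output with
# the shape shown above: outer keys are OSD ids, each value is a list of LVs,
# and the per-LV "tags" dict carries the ceph.* metadata.
import json

with open("lvm_list.json") as fh:   # hypothetical capture of the JSON above
    listing = json.load(fh)

for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} type={tags['ceph.type']}")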
Nov 29 08:12:43 compute-0 sudo[288364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:43 compute-0 sudo[288364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:43 compute-0 sudo[288364]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:43 compute-0 sudo[288389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:43 compute-0 sudo[288389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:43 compute-0 sudo[288389]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:12:43 compute-0 sudo[288414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:43 compute-0 sudo[288414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:43 compute-0 sudo[288414]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:43 compute-0 sudo[288439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:12:43 compute-0 sudo[288439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.778 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.780 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.780 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.780 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.780 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.782 255071 INFO nova.compute.manager [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Terminating instance
Nov 29 08:12:43 compute-0 nova_compute[255040]: 2025-11-29 08:12:43.783 255071 DEBUG nova.compute.manager [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:12:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Nov 29 08:12:43 compute-0 podman[288505]: 2025-11-29 08:12:43.836194627 +0000 UTC m=+0.030025433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.006956896 +0000 UTC m=+0.200787712 container create a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:12:44 compute-0 kernel: tap53f09235-af (unregistering): left promiscuous mode
Nov 29 08:12:44 compute-0 systemd[1]: Started libpod-conmon-a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b.scope.
Nov 29 08:12:44 compute-0 NetworkManager[49116]: <info>  [1764403964.0623] device (tap53f09235-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:12:44 compute-0 ovn_controller[153295]: 2025-11-29T08:12:44Z|00189|binding|INFO|Releasing lport 53f09235-af69-49c2-9137-b16a1ea8d7f3 from this chassis (sb_readonly=0)
Nov 29 08:12:44 compute-0 ovn_controller[153295]: 2025-11-29T08:12:44Z|00190|binding|INFO|Setting lport 53f09235-af69-49c2-9137-b16a1ea8d7f3 down in Southbound
Nov 29 08:12:44 compute-0 ovn_controller[153295]: 2025-11-29T08:12:44Z|00191|binding|INFO|Removing iface tap53f09235-af ovn-installed in OVS
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.075 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.078 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.095 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:b2:a7 10.100.0.6'], port_security=['fa:16:3e:62:b2:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76ba0630-af87-46e8-83ee-b983d76f480d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=53f09235-af69-49c2-9137-b16a1ea8d7f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.095 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.097 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 53f09235-af69-49c2-9137-b16a1ea8d7f3 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.098 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.101 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bd765edc-77e3-44d3-bd6f-3f386323cd9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.102 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace which is not needed anymore
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.113962319 +0000 UTC m=+0.307793125 container init a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:44 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 08:12:44 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 15.911s CPU time.
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.126853449 +0000 UTC m=+0.320684235 container start a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 08:12:44 compute-0 systemd-machined[216271]: Machine qemu-19-instance-00000013 terminated.
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.131325059 +0000 UTC m=+0.325155865 container attach a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:12:44 compute-0 nice_mccarthy[288521]: 167 167
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.134740091 +0000 UTC m=+0.328570877 container died a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 08:12:44 compute-0 systemd[1]: libpod-a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b.scope: Deactivated successfully.
Nov 29 08:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6362de7e84712f774892d160581153a648caccdeb722897910c70c2d23fa2db-merged.mount: Deactivated successfully.
Nov 29 08:12:44 compute-0 podman[288505]: 2025-11-29 08:12:44.182811431 +0000 UTC m=+0.376642217 container remove a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:12:44 compute-0 systemd[1]: libpod-conmon-a5fcf804680a0f39ac909a919f063a1c55d2ab63609ca03b79d15dd55e32b02b.scope: Deactivated successfully.
Nov 29 08:12:44 compute-0 NetworkManager[49116]: <info>  [1764403964.2099] manager: (tap53f09235-af): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.226 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.249 255071 INFO nova.virt.libvirt.driver [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Instance destroyed successfully.
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.250 255071 DEBUG nova.objects.instance [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid 76ba0630-af87-46e8-83ee-b983d76f480d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.263 255071 DEBUG nova.virt.libvirt.vif [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-823551388',display_name='tempest-TestVolumeBootPattern-server-823551388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-823551388',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-lh2flxu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:07Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=76ba0630-af87-46e8-83ee-b983d76f480d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.264 255071 DEBUG nova.network.os_vif_util [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "address": "fa:16:3e:62:b2:a7", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53f09235-af", "ovs_interfaceid": "53f09235-af69-49c2-9137-b16a1ea8d7f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.265 255071 DEBUG nova.network.os_vif_util [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.265 255071 DEBUG os_vif [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.272 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.273 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53f09235-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.275 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.277 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.282 255071 INFO os_vif [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:b2:a7,bridge_name='br-int',has_traffic_filtering=True,id=53f09235-af69-49c2-9137-b16a1ea8d7f3,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53f09235-af')
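[editor's note] The unplug sequence above (DelPortCommand on br-int followed by the os_vif "Successfully unplugged" confirmation) is driven through ovsdbapp. A minimal standalone sketch of the same operation, assuming a local ovsdb-server on its default unix socket (the socket path and timeout are illustrative assumptions; the port and bridge names come from the log):

    # Sketch only, not the Nova/os-vif code path itself.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn_str = 'unix:/run/openvswitch/db.sock'   # assumed default socket path
    idl = connection.OvsdbIdl.from_server(conn_str, 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same semantics as the logged DelPortCommand(port=..., bridge=br-int, if_exists=True)
    ovs.del_port('tap53f09235-af', bridge='br-int', if_exists=True).execute(check_error=True)
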
Nov 29 08:12:44 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [NOTICE]   (287174) : haproxy version is 2.8.14-c23fe91
Nov 29 08:12:44 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [NOTICE]   (287174) : path to executable is /usr/sbin/haproxy
Nov 29 08:12:44 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [WARNING]  (287174) : Exiting Master process...
Nov 29 08:12:44 compute-0 systemd[1]: libpod-6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7.scope: Deactivated successfully.
Nov 29 08:12:44 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [ALERT]    (287174) : Current worker (287176) exited with code 143 (Terminated)
Nov 29 08:12:44 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[287170]: [WARNING]  (287174) : All workers exited. Exiting... (0)
Nov 29 08:12:44 compute-0 conmon[287170]: conmon 6b88e70c7e4158bcc9dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7.scope/container/memory.events
Nov 29 08:12:44 compute-0 podman[288569]: 2025-11-29 08:12:44.320809243 +0000 UTC m=+0.056458087 container died 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7-userdata-shm.mount: Deactivated successfully.
Nov 29 08:12:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cae95a9c5c674f9df133ee71d96a5c1fe6a9b1692ed34972d8a08813376136f-merged.mount: Deactivated successfully.
Nov 29 08:12:44 compute-0 podman[288569]: 2025-11-29 08:12:44.370065585 +0000 UTC m=+0.105714429 container cleanup 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:12:44 compute-0 systemd[1]: libpod-conmon-6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7.scope: Deactivated successfully.
Nov 29 08:12:44 compute-0 podman[288609]: 2025-11-29 08:12:44.420426578 +0000 UTC m=+0.065976846 container create 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:12:44 compute-0 podman[288637]: 2025-11-29 08:12:44.44527522 +0000 UTC m=+0.050363903 container remove 6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.452 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f70d6727-ff39-42c2-8819-6b2d124a6228]: (4, ('Sat Nov 29 08:12:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7)\n6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7\nSat Nov 29 08:12:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7)\n6b88e70c7e4158bcc9dc7db09d98f0517f4009afc4ecb08fa9eb357d22c67ca7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.456 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f4258103-0925-41ef-9a57-cdc52026c353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.457 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:44 compute-0 kernel: tap6e23492e-b0: left promiscuous mode
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.459 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.478 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.481 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[13f0f7bc-faa5-4d99-92db-0df89ac84842]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 podman[288609]: 2025-11-29 08:12:44.396806899 +0000 UTC m=+0.042357187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:44 compute-0 systemd[1]: Started libpod-conmon-80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec.scope.
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.496 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bf41ba-48e2-47f4-8b90-3042ee098536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.500 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[42a23a70-5e8b-42cf-a37e-0c9c39c6896d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.520 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d3177e1b-08e7-4a87-a2e1-f42ce5a69944]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616453, 'reachable_time': 32350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288662, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.525 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:12:44 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:44.525 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[d60e0be4-4746-42b7-b51e-a56315058415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
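[editor's note] The "Namespace ovnmeta-... deleted" step above is neutron's privileged ip_lib removing the per-network metadata namespace once its haproxy container is gone. A minimal sketch of the underlying operation, assuming pyroute2 (which neutron calls under privsep) is available:

    # Sketch only: remove a named network namespace, tolerating "already gone".
    import errno
    from pyroute2 import netns

    def remove_metadata_netns(name):
        try:
            netns.remove(name)
        except OSError as exc:
            if exc.errno != errno.ENOENT:
                raise

    remove_metadata_netns('ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f')
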
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.526 255071 INFO nova.virt.libvirt.driver [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Deleting instance files /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d_del
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.526 255071 INFO nova.virt.libvirt.driver [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Deletion of /var/lib/nova/instances/76ba0630-af87-46e8-83ee-b983d76f480d_del complete
Nov 29 08:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9a883c20b0c431d606cb8e2d86f83e12931da0df0a217328cfd6e627e3e2a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9a883c20b0c431d606cb8e2d86f83e12931da0df0a217328cfd6e627e3e2a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9a883c20b0c431d606cb8e2d86f83e12931da0df0a217328cfd6e627e3e2a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9a883c20b0c431d606cb8e2d86f83e12931da0df0a217328cfd6e627e3e2a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:44 compute-0 podman[288609]: 2025-11-29 08:12:44.547360391 +0000 UTC m=+0.192910689 container init 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:44 compute-0 podman[288609]: 2025-11-29 08:12:44.555950983 +0000 UTC m=+0.201501251 container start 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:12:44 compute-0 podman[288609]: 2025-11-29 08:12:44.559264462 +0000 UTC m=+0.204814750 container attach 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.580 255071 INFO nova.compute.manager [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.580 255071 DEBUG oslo.service.loopingcall [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.581 255071 DEBUG nova.compute.manager [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.581 255071 DEBUG nova.network.neutron [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.761 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.773 255071 DEBUG nova.compute.manager [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-unplugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.773 255071 DEBUG oslo_concurrency.lockutils [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.774 255071 DEBUG oslo_concurrency.lockutils [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.774 255071 DEBUG oslo_concurrency.lockutils [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.775 255071 DEBUG nova.compute.manager [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] No waiting events found dispatching network-vif-unplugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.775 255071 DEBUG nova.compute.manager [req-a1afbd48-8c82-4820-a99f-544b221a61d2 req-61fed50a-163d-4ae3-b984-fb797db8c4b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-unplugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:12:44 compute-0 nova_compute[255040]: 2025-11-29 08:12:44.827 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e23492e\x2dbeff\x2d43f6\x2db4d1\x2df88ebeea0b6f.mount: Deactivated successfully.
Nov 29 08:12:45 compute-0 ceph-mon[75237]: pgmap v1700: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Nov 29 08:12:45 compute-0 podman[288666]: 2025-11-29 08:12:45.151324984 +0000 UTC m=+0.086464719 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.359 255071 DEBUG nova.network.neutron [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.383 255071 INFO nova.compute.manager [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Took 0.80 seconds to deallocate network for instance.
Nov 29 08:12:45 compute-0 modest_carver[288657]: {
Nov 29 08:12:45 compute-0 modest_carver[288657]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_id": 2,
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "type": "bluestore"
Nov 29 08:12:45 compute-0 modest_carver[288657]:     },
Nov 29 08:12:45 compute-0 modest_carver[288657]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_id": 0,
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "type": "bluestore"
Nov 29 08:12:45 compute-0 modest_carver[288657]:     },
Nov 29 08:12:45 compute-0 modest_carver[288657]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_id": 1,
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:12:45 compute-0 modest_carver[288657]:         "type": "bluestore"
Nov 29 08:12:45 compute-0 modest_carver[288657]:     }
Nov 29 08:12:45 compute-0 modest_carver[288657]: }
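[editor's note] The JSON printed by the short-lived ceph container above (modest_carver) looks like a ceph-volume raw/inventory-style listing keyed by OSD UUID, gathered for cephadm's per-host device record. A minimal sketch of consuming it, with the payload trimmed to one of the three entries for brevity:

    import json

    # Trimmed copy of the listing above; the full output has three such entries.
    raw = '''
    {
      "2406c235-b877-477d-8a53-b5b71e6811ae": {
        "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
        "type": "bluestore"
      }
    }
    '''

    osds = json.loads(raw)
    for uuid_, info in sorted(osds.items(), key=lambda kv: kv[1]['osd_id']):
        print(f"osd.{info['osd_id']}: {info['device']} ({info['type']}), fsid {info['ceph_fsid']}")
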
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.706 255071 INFO nova.compute.manager [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Took 0.32 seconds to detach 1 volumes for instance.
Nov 29 08:12:45 compute-0 systemd[1]: libpod-80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec.scope: Deactivated successfully.
Nov 29 08:12:45 compute-0 systemd[1]: libpod-80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec.scope: Consumed 1.164s CPU time.
Nov 29 08:12:45 compute-0 conmon[288657]: conmon 80bf9a5017dfc2aa314a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec.scope/container/memory.events
Nov 29 08:12:45 compute-0 podman[288609]: 2025-11-29 08:12:45.722245635 +0000 UTC m=+1.367795963 container died 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.746 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.747 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd9a883c20b0c431d606cb8e2d86f83e12931da0df0a217328cfd6e627e3e2a8-merged.mount: Deactivated successfully.
Nov 29 08:12:45 compute-0 podman[288609]: 2025-11-29 08:12:45.790271815 +0000 UTC m=+1.435822083 container remove 80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 08:12:45 compute-0 systemd[1]: libpod-conmon-80bf9a5017dfc2aa314aa35baca9574e1d2b59f4a581c8368a0198dadd119dec.scope: Deactivated successfully.
Nov 29 08:12:45 compute-0 nova_compute[255040]: 2025-11-29 08:12:45.814 255071 DEBUG oslo_concurrency.processutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:45 compute-0 sudo[288439]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:12:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 129 op/s
Nov 29 08:12:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:12:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 664207ca-e60d-42af-87cc-81d647486af1 does not exist
Nov 29 08:12:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f170d7e3-7e59-435c-936c-4227e17ed40a does not exist
Nov 29 08:12:45 compute-0 sudo[288727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:45 compute-0 sudo[288727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:45 compute-0 sudo[288727]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:46 compute-0 sudo[288771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:12:46 compute-0 sudo[288771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:46 compute-0 sudo[288771]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2480151308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.299 255071 DEBUG oslo_concurrency.processutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
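[editor's note] The DISK_GB figures in the inventory logged just below come from nova's RBD image backend shelling out to ceph df --format=json (the 0.485s call above). A rough sketch of that step; the JSON key names used here ('stats', 'total_bytes', 'total_avail_bytes') match recent Ceph releases but should be treated as an assumption:

    import json
    import subprocess

    # Same command as in the log; assumes the client.openstack keyring is readable.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    total_gib = df['stats']['total_bytes'] / 1024 ** 3
    avail_gib = df['stats']['total_avail_bytes'] / 1024 ** 3
    print(f"cluster: {total_gib:.0f} GiB total, {avail_gib:.0f} GiB avail")
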
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.307 255071 DEBUG nova.compute.provider_tree [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.327 255071 DEBUG nova.scheduler.client.report [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.357 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
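[editor's note] The inventory dict logged for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e encodes what placement can hand out per resource class; to my understanding the usable capacity is (total - reserved) * allocation_ratio. A quick worked check against the logged numbers:

    # Assumed placement capacity rule: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2
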
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.449 255071 INFO nova.scheduler.client.report [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance 76ba0630-af87-46e8-83ee-b983d76f480d
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.573 255071 DEBUG oslo_concurrency.lockutils [None req-2882d7a2-6e03-4743-a0dc-5c3fb37c04ac 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:46 compute-0 ceph-mon[75237]: pgmap v1701: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 129 op/s
Nov 29 08:12:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:12:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2480151308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.902 255071 DEBUG nova.compute.manager [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.902 255071 DEBUG oslo_concurrency.lockutils [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.902 255071 DEBUG oslo_concurrency.lockutils [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.903 255071 DEBUG oslo_concurrency.lockutils [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "76ba0630-af87-46e8-83ee-b983d76f480d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.903 255071 DEBUG nova.compute.manager [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] No waiting events found dispatching network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.904 255071 WARNING nova.compute.manager [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received unexpected event network-vif-plugged-53f09235-af69-49c2-9137-b16a1ea8d7f3 for instance with vm_state deleted and task_state None.
Nov 29 08:12:46 compute-0 nova_compute[255040]: 2025-11-29 08:12:46.904 255071 DEBUG nova.compute.manager [req-6b338af8-7588-4734-ba63-cc067e65b3bd req-196f15b8-fb42-46ee-8461-55b71182193d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Received event network-vif-deleted-53f09235-af69-49c2-9137-b16a1ea8d7f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:12:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:49 compute-0 ceph-mon[75237]: pgmap v1702: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.065 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.275 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.829 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.904 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.905 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.921 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.982 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.982 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.991 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:12:49 compute-0 nova_compute[255040]: 2025-11-29 08:12:49.991 255071 INFO nova.compute.claims [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.104 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:50 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797166844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.588 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.595 255071 DEBUG nova.compute.provider_tree [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.618 255071 DEBUG nova.scheduler.client.report [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.649 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.650 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.693 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.693 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.715 255071 INFO nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.749 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.804 255071 INFO nova.virt.block_device [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Booting with volume 3d3fcb07-9e86-4e90-86d4-07632d484796 at /dev/vda
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.869 255071 DEBUG nova.policy [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.917 255071 DEBUG os_brick.utils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.919 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.934 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.934 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[b9adb3b0-5654-4e85-aad9-ce2ade26f8d8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.936 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.960 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.960 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5883f0ed-56eb-48c8-a65e-8a95ba5de783]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.963 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.973 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.974 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[f75a2ba4-0abe-4e8a-adf3-973e6964c00d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.975 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e3ceff-c6a2-417f-a6ac-d4bd51c1cceb]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:50 compute-0 nova_compute[255040]: 2025-11-29 08:12:50.976 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:51 compute-0 ceph-mon[75237]: pgmap v1703: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 08:12:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2797166844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.015 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.018 255071 DEBUG os_brick.initiator.connectors.lightos [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.018 255071 DEBUG os_brick.initiator.connectors.lightos [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.019 255071 DEBUG os_brick.initiator.connectors.lightos [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.019 255071 DEBUG os_brick.utils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
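The get_connector_properties exchange traced above corresponds to os_brick's public helper of the same name. As a hedged, standalone sketch (not Nova's code path), an equivalent call with the same arguments shown in the "==>" line would look like this; the rootwrap command, IP and host values are simply copied from the trace:

    # Sketch: gather initiator-side connector properties as in the trace above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # The returned dict carries the iSCSI IQN and NVMe host NQN seen in the "<==" line.
    print(props.get('initiator'), props.get('nqn'))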
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.019 255071 DEBUG nova.virt.block_device [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating existing volume attachment record: 9b4c1baf-e4db-4aeb-ad21-660c66a4c6b4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:12:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:51 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2018872559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.629 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Successfully created port: 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.832 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.834 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.834 255071 INFO nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Creating image(s)
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.835 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.835 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Ensure instance console log exists: /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.835 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.836 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:51 compute-0 nova_compute[255040]: 2025-11-29 08:12:51.836 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 377 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 29 08:12:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2018872559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.115 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.285 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Successfully updated port: 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.304 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.305 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.305 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.379 255071 DEBUG nova.compute.manager [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.380 255071 DEBUG nova.compute.manager [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing instance network info cache due to event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.380 255071 DEBUG oslo_concurrency.lockutils [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:12:52 compute-0 nova_compute[255040]: 2025-11-29 08:12:52.503 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:12:53 compute-0 ceph-mon[75237]: pgmap v1704: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 377 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.566 255071 DEBUG nova.network.neutron [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
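The network_info payload logged above is plain JSON-compatible data. Purely as an illustration (assuming the logged list has been captured into a string named network_info_json, a hypothetical variable), extracting the port's MAC and fixed IPs looks like:

    # Sketch: pull MAC and fixed IPs out of the network_info structure above.
    import json

    vif = json.loads(network_info_json)[0]      # network_info_json: hypothetical copy of the list above
    mac = vif['address']                        # fa:16:3e:9f:12:20
    ips = [ip['address']
           for subnet in vif['network']['subnets']
           for ip in subnet['ips']]             # ['10.100.0.9']
    print(mac, ips)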
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.598 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.598 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Instance network_info: |[{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.599 255071 DEBUG oslo_concurrency.lockutils [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.599 255071 DEBUG nova.network.neutron [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.604 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Start _get_guest_xml network_info=[{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3d3fcb07-9e86-4e90-86d4-07632d484796', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3d3fcb07-9e86-4e90-86d4-07632d484796', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cde9039b-1882-4723-9524-c51a289f67b0', 'attached_at': '', 'detached_at': '', 'volume_id': '3d3fcb07-9e86-4e90-86d4-07632d484796', 'serial': '3d3fcb07-9e86-4e90-86d4-07632d484796'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '9b4c1baf-e4db-4aeb-ad21-660c66a4c6b4', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.608 255071 WARNING nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.613 255071 DEBUG nova.virt.libvirt.host [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.614 255071 DEBUG nova.virt.libvirt.host [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.617 255071 DEBUG nova.virt.libvirt.host [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.617 255071 DEBUG nova.virt.libvirt.host [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.618 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.618 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.619 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.619 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.619 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.619 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.620 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.620 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.620 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.621 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.621 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.621 255071 DEBUG nova.virt.hardware [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.645 255071 DEBUG nova.storage.rbd_utils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image cde9039b-1882-4723-9524-c51a289f67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:53 compute-0 nova_compute[255040]: 2025-11-29 08:12:53.649 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 101 KiB/s rd, 442 KiB/s wr, 45 op/s
Nov 29 08:12:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632640131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.088 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
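The monitor lookup above is an ordinary external command run through oslo.concurrency. A hedged sketch of the same lookup done standalone follows; note that field names in the JSON output may vary slightly between Ceph releases:

    # Sketch: repeat the "ceph mon dump" call from the trace and list monitor addresses.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out).get('mons', [])
    print([m.get('public_addr') for m in mons])  # e.g. ['192.168.122.100:6789/0']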
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.113 255071 DEBUG nova.virt.libvirt.vif [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-627045043',display_name='tempest-TestVolumeBootPattern-server-627045043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-627045043',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-r93fjp7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:50Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=cde9039b-1882-4723-9524-c51a289f67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
Nov 29 08:12:54 compute-0 nova_compute[255040]: /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.114 255071 DEBUG nova.network.os_vif_util [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.115 255071 DEBUG nova.network.os_vif_util [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.116 255071 DEBUG nova.objects.instance [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid cde9039b-1882-4723-9524-c51a289f67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.129 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <uuid>cde9039b-1882-4723-9524-c51a289f67b0</uuid>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <name>instance-00000015</name>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-server-627045043</nova:name>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:12:53</nova:creationTime>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <nova:port uuid="9fb97b8d-7982-4dac-8c85-e972bacc8ad7">
Nov 29 08:12:54 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <system>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="serial">cde9039b-1882-4723-9524-c51a289f67b0</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="uuid">cde9039b-1882-4723-9524-c51a289f67b0</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </system>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <os>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </os>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <features>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </features>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/cde9039b-1882-4723-9524-c51a289f67b0_disk.config">
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-3d3fcb07-9e86-4e90-86d4-07632d484796">
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </source>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:12:54 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <serial>3d3fcb07-9e86-4e90-86d4-07632d484796</serial>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:9f:12:20"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <target dev="tap9fb97b8d-79"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/console.log" append="off"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <video>
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </video>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:12:54 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:12:54 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:12:54 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:12:54 compute-0 nova_compute[255040]: </domain>
Nov 29 08:12:54 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
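The domain XML above is what Nova hands to libvirt to start the guest. Purely as an illustration of that hand-off (Nova's driver additionally plugs VIFs, attaches volumes and waits for Neutron events before starting the domain), a minimal libvirt-python sketch using a hypothetical file holding that XML would be:

    # Sketch: define and start a domain from XML like the one logged above.
    import libvirt

    with open('domain.xml') as f:           # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')   # local QEMU/KVM connection
    dom = conn.defineXML(xml)               # make the definition persistent
    dom.create()                            # boot the guest
    print(dom.name(), dom.UUIDString())
    conn.close()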
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.130 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Preparing to wait for external event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.131 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.131 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.131 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.132 255071 DEBUG nova.virt.libvirt.vif [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-627045043',display_name='tempest-TestVolumeBootPattern-server-627045043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-627045043',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-r93fjp7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:50Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=cde9039b-1882-4723-9524-c51a289f67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
Nov 29 08:12:54 compute-0 nova_compute[255040]: /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.132 255071 DEBUG nova.network.os_vif_util [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.133 255071 DEBUG nova.network.os_vif_util [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.133 255071 DEBUG os_vif [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.134 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.134 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.135 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.140 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.140 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fb97b8d-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.141 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fb97b8d-79, col_values=(('external_ids', {'iface-id': '9fb97b8d-7982-4dac-8c85-e972bacc8ad7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:12:20', 'vm-uuid': 'cde9039b-1882-4723-9524-c51a289f67b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.176 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:54 compute-0 NetworkManager[49116]: <info>  [1764403974.1782] manager: (tap9fb97b8d-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.180 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.186 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.188 255071 INFO os_vif [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79')
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.264 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.265 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.265 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:9f:12:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.266 255071 INFO nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Using config drive
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.288 255071 DEBUG nova.storage.rbd_utils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image cde9039b-1882-4723-9524-c51a289f67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.832 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.886 255071 INFO nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Creating config drive at /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.896 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7618ybv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.983 255071 DEBUG nova.network.neutron [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updated VIF entry in instance network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:12:54 compute-0 nova_compute[255040]: 2025-11-29 08:12:54.985 255071 DEBUG nova.network.neutron [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.011 255071 DEBUG oslo_concurrency.lockutils [req-4f5cca26-b8ea-4db6-8a56-018be05b286e req-7eff2587-9ee2-451f-9b5a-2a1c5e91e493 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.033 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7618ybv" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:55 compute-0 ceph-mon[75237]: pgmap v1705: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 101 KiB/s rd, 442 KiB/s wr, 45 op/s
Nov 29 08:12:55 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2632640131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.063 255071 DEBUG nova.storage.rbd_utils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image cde9039b-1882-4723-9524-c51a289f67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.068 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config cde9039b-1882-4723-9524-c51a289f67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.230 255071 DEBUG oslo_concurrency.processutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config cde9039b-1882-4723-9524-c51a289f67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.232 255071 INFO nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Deleting local config drive /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0/disk.config because it was imported into RBD.
Nov 29 08:12:55 compute-0 kernel: tap9fb97b8d-79: entered promiscuous mode
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.3068] manager: (tap9fb97b8d-79): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.306 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ovn_controller[153295]: 2025-11-29T08:12:55Z|00192|binding|INFO|Claiming lport 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 for this chassis.
Nov 29 08:12:55 compute-0 ovn_controller[153295]: 2025-11-29T08:12:55Z|00193|binding|INFO|9fb97b8d-7982-4dac-8c85-e972bacc8ad7: Claiming fa:16:3e:9f:12:20 10.100.0.9
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.319 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:12:20 10.100.0.9'], port_security=['fa:16:3e:9f:12:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cde9039b-1882-4723-9524-c51a289f67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=9fb97b8d-7982-4dac-8c85-e972bacc8ad7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.320 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.322 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:55 compute-0 ovn_controller[153295]: 2025-11-29T08:12:55Z|00194|binding|INFO|Setting lport 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 ovn-installed in OVS
Nov 29 08:12:55 compute-0 ovn_controller[153295]: 2025-11-29T08:12:55Z|00195|binding|INFO|Setting lport 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 up in Southbound
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.329 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.332 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.339 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[366895e5-f207-4546-a31b-c3cb5b102521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.340 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e23492e-b1 in ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.343 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e23492e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.344 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c4381ca9-0e6b-4419-8064-d6d8e71b1560]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.345 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b94c76-b677-4de8-8a7f-6a92f7d97500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 systemd-udevd[288942]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:12:55 compute-0 systemd-machined[216271]: New machine qemu-21-instance-00000015.
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.3638] device (tap9fb97b8d-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.3650] device (tap9fb97b8d-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:12:55 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.368 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[3152a519-714a-4c62-b556-d2678b966a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.393 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8a89b6d6-4aa3-4389-9cd8-3c6b893229d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.437 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[3b43c002-6b8d-40ce-8473-f892f91c4690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.443 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5255ed22-ba0d-40be-8c0f-f3c9c4576961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.4450] manager: (tap6e23492e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 08:12:55 compute-0 systemd-udevd[288946]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.481 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[f65e4217-5f9b-4a16-9e13-b6d93f1d2ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.486 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[2df52b55-1a43-4a71-829b-60d6b1ac89d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.511 255071 DEBUG nova.compute.manager [req-caf20310-eec7-4987-86a4-f816105e9a23 req-14666591-0450-4aa8-9719-1f576be54c36 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.511 255071 DEBUG oslo_concurrency.lockutils [req-caf20310-eec7-4987-86a4-f816105e9a23 req-14666591-0450-4aa8-9719-1f576be54c36 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.511 255071 DEBUG oslo_concurrency.lockutils [req-caf20310-eec7-4987-86a4-f816105e9a23 req-14666591-0450-4aa8-9719-1f576be54c36 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.511 255071 DEBUG oslo_concurrency.lockutils [req-caf20310-eec7-4987-86a4-f816105e9a23 req-14666591-0450-4aa8-9719-1f576be54c36 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.512 255071 DEBUG nova.compute.manager [req-caf20310-eec7-4987-86a4-f816105e9a23 req-14666591-0450-4aa8-9719-1f576be54c36 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Processing event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.5153] device (tap6e23492e-b0): carrier: link connected
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.520 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b7e1ce-b650-4347-9011-c5192488a128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.543 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[be0bd316-3de3-4587-8acb-9d9a01aab86c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621285, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288975, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.560 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e16e8608-fdf3-4c7a-b098-a6e1d11e0bc3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:1984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621285, 'tstamp': 621285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288976, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.585 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7829302b-1836-46c6-9f93-fbf1b77ef690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621285, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288977, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.620 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[15290c71-1413-4295-ba90-20701263e493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.701 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8755fb-114b-42ac-a37a-8dbb7d761891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.703 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.703 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.703 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.706 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 NetworkManager[49116]: <info>  [1764403975.7069] manager: (tap6e23492e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 08:12:55 compute-0 kernel: tap6e23492e-b0: entered promiscuous mode
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.710 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:55 compute-0 ovn_controller[153295]: 2025-11-29T08:12:55Z|00196|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.712 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.714 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.714 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.715 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d2de5260-d654-4d92-a74b-f080552edd4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.716 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.pid.haproxy
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:12:55 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:55.717 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'env', 'PROCESS_TAG=haproxy-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e23492e-beff-43f6-b4d1-f88ebeea0b6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:12:55 compute-0 nova_compute[255040]: 2025-11-29 08:12:55.729 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 21 KiB/s rd, 46 KiB/s wr, 20 op/s
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.052 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403976.0518904, cde9039b-1882-4723-9524-c51a289f67b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.053 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] VM Started (Lifecycle Event)
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.056 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.060 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.065 255071 INFO nova.virt.libvirt.driver [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Instance spawned successfully.
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.066 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.070 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.075 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.086 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.086 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.087 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.087 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.087 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.088 255071 DEBUG nova.virt.libvirt.driver [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.092 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.092 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403976.0520694, cde9039b-1882-4723-9524-c51a289f67b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.093 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] VM Paused (Lifecycle Event)
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.117 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.122 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403976.0597398, cde9039b-1882-4723-9524-c51a289f67b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.123 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] VM Resumed (Lifecycle Event)
Nov 29 08:12:56 compute-0 podman[289051]: 2025-11-29 08:12:56.136307225 +0000 UTC m=+0.057686531 container create d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.153 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.162 255071 INFO nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Took 4.33 seconds to spawn the instance on the hypervisor.
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.162 255071 DEBUG nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035108429787718954 of space, bias 1.0, pg target 0.10532528936315685 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03520457893794361 of space, bias 1.0, pg target 10.561373681383083 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10037602457348441 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19309890746076708 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.167 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:12:56 compute-0 systemd[1]: Started libpod-conmon-d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f.scope.
Nov 29 08:12:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.201 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:12:56 compute-0 podman[289051]: 2025-11-29 08:12:56.10874233 +0000 UTC m=+0.030121656 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837dc8d65587c503a995c087963bf7886a1f5f626d46713a14029772f6f7a075/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:56 compute-0 podman[289051]: 2025-11-29 08:12:56.220005899 +0000 UTC m=+0.141385245 container init d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:12:56 compute-0 podman[289051]: 2025-11-29 08:12:56.226189476 +0000 UTC m=+0.147568782 container start d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.247 255071 INFO nova.compute.manager [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Took 6.29 seconds to build instance.
Nov 29 08:12:56 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [NOTICE]   (289070) : New worker (289072) forked
Nov 29 08:12:56 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [NOTICE]   (289070) : Loading success.
Nov 29 08:12:56 compute-0 nova_compute[255040]: 2025-11-29 08:12:56.269 255071 DEBUG oslo_concurrency.lockutils [None req-3ee435a1-0310-40f4-b741-940dd1a67dd3 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:57 compute-0 ceph-mon[75237]: pgmap v1706: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 21 KiB/s rd, 46 KiB/s wr, 20 op/s
Nov 29 08:12:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 KiB/s rd, 27 KiB/s wr, 4 op/s
Nov 29 08:12:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.664985) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978665223, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 714, "num_deletes": 251, "total_data_size": 805657, "memory_usage": 818224, "flush_reason": "Manual Compaction"}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978672337, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 547567, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31933, "largest_seqno": 32646, "table_properties": {"data_size": 544344, "index_size": 1067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8907, "raw_average_key_size": 20, "raw_value_size": 537422, "raw_average_value_size": 1258, "num_data_blocks": 47, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403928, "oldest_key_time": 1764403928, "file_creation_time": 1764403978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 7402 microseconds, and 3688 cpu microseconds.
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.672418) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 547567 bytes OK
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.672447) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.673875) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.673904) EVENT_LOG_v1 {"time_micros": 1764403978673890, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.673933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 801932, prev total WAL file size 801932, number of live WAL files 2.
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.674979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303037' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(534KB)], [65(10MB)]
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978675155, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11995356, "oldest_snapshot_seqno": -1}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6202 keys, 8924432 bytes, temperature: kUnknown
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978751593, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8924432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8880323, "index_size": 27487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 157374, "raw_average_key_size": 25, "raw_value_size": 8766092, "raw_average_value_size": 1413, "num_data_blocks": 1104, "num_entries": 6202, "num_filter_entries": 6202, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764403978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.751938) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8924432 bytes
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.753425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.7 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(38.2) write-amplify(16.3) OK, records in: 6700, records dropped: 498 output_compression: NoCompression
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.753446) EVENT_LOG_v1 {"time_micros": 1764403978753435, "job": 36, "event": "compaction_finished", "compaction_time_micros": 76555, "compaction_time_cpu_micros": 28238, "output_level": 6, "num_output_files": 1, "total_output_size": 8924432, "num_input_records": 6700, "num_output_records": 6202, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978753669, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403978755624, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.674898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.755664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.755669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.755671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.755673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:12:58.755675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:12:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024658249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2024658249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.774 255071 DEBUG nova.compute.manager [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.774 255071 DEBUG oslo_concurrency.lockutils [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.774 255071 DEBUG oslo_concurrency.lockutils [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.775 255071 DEBUG oslo_concurrency.lockutils [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.775 255071 DEBUG nova.compute.manager [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] No waiting events found dispatching network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.775 255071 WARNING nova.compute.manager [req-001b7d6d-57ec-4b9d-9684-16575c4d78ba req-914dc1dc-7ebb-4168-a4fb-24ed529188bd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received unexpected event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 for instance with vm_state active and task_state None.
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.968 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.968 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.969 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.969 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.969 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.971 255071 INFO nova.compute.manager [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Terminating instance
Nov 29 08:12:58 compute-0 nova_compute[255040]: 2025-11-29 08:12:58.972 255071 DEBUG nova.compute.manager [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:12:59 compute-0 ceph-mon[75237]: pgmap v1707: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 KiB/s rd, 27 KiB/s wr, 4 op/s
Nov 29 08:12:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2024658249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2024658249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.178 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 kernel: tap788543c5-e7 (unregistering): left promiscuous mode
Nov 29 08:12:59 compute-0 NetworkManager[49116]: <info>  [1764403979.1845] device (tap788543c5-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:12:59 compute-0 ovn_controller[153295]: 2025-11-29T08:12:59Z|00197|binding|INFO|Releasing lport 788543c5-e772-41b1-a887-4ced66fc5497 from this chassis (sb_readonly=0)
Nov 29 08:12:59 compute-0 ovn_controller[153295]: 2025-11-29T08:12:59Z|00198|binding|INFO|Setting lport 788543c5-e772-41b1-a887-4ced66fc5497 down in Southbound
Nov 29 08:12:59 compute-0 ovn_controller[153295]: 2025-11-29T08:12:59Z|00199|binding|INFO|Removing iface tap788543c5-e7 ovn-installed in OVS
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.215 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.214 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:f4:ae 10.100.0.6'], port_security=['fa:16:3e:97:f4:ae 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '66776362-7d85-47fc-a7d5-f2c50e77d9da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86b65b38-9c10-44ce-abcd-c34ac448faec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd12500a556245649485ffa25f9896cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '413160d7-759e-4024-b663-8512ced0a321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d4d91e7-0e7f-4535-8521-212392cb3f4e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=788543c5-e772-41b1-a887-4ced66fc5497) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.216 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 788543c5-e772-41b1-a887-4ced66fc5497 in datapath 86b65b38-9c10-44ce-abcd-c34ac448faec unbound from our chassis
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.217 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 86b65b38-9c10-44ce-abcd-c34ac448faec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.219 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[89e342a2-ad0b-4e40-b15e-6934c9cdf4fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.219 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec namespace which is not needed anymore
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.225 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.245 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403964.2442203, 76ba0630-af87-46e8-83ee-b983d76f480d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.245 255071 INFO nova.compute.manager [-] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] VM Stopped (Lifecycle Event)
Nov 29 08:12:59 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 08:12:59 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 18.084s CPU time.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.268 255071 DEBUG nova.compute.manager [None req-9c763454-3d6c-4444-9ebf-3d013132a1fb - - - - - -] [instance: 76ba0630-af87-46e8-83ee-b983d76f480d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:59 compute-0 systemd-machined[216271]: Machine qemu-20-instance-00000014 terminated.
Nov 29 08:12:59 compute-0 podman[289082]: 2025-11-29 08:12:59.344806607 +0000 UTC m=+0.175246820 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:12:59 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [NOTICE]   (287683) : haproxy version is 2.8.14-c23fe91
Nov 29 08:12:59 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [NOTICE]   (287683) : path to executable is /usr/sbin/haproxy
Nov 29 08:12:59 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [WARNING]  (287683) : Exiting Master process...
Nov 29 08:12:59 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [ALERT]    (287683) : Current worker (287685) exited with code 143 (Terminated)
Nov 29 08:12:59 compute-0 neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec[287679]: [WARNING]  (287683) : All workers exited. Exiting... (0)
Nov 29 08:12:59 compute-0 systemd[1]: libpod-77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722.scope: Deactivated successfully.
Nov 29 08:12:59 compute-0 podman[289129]: 2025-11-29 08:12:59.390185185 +0000 UTC m=+0.055931474 container died 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.395 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.405 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.411 255071 INFO nova.virt.libvirt.driver [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Instance destroyed successfully.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.412 255071 DEBUG nova.objects.instance [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lazy-loading 'resources' on Instance uuid 66776362-7d85-47fc-a7d5-f2c50e77d9da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722-userdata-shm.mount: Deactivated successfully.
Nov 29 08:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-07415294d5eb91a7f1e753ddfa964a0d4ea99357e0973fae64ba0cdbb5099f35-merged.mount: Deactivated successfully.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.437 255071 DEBUG nova.virt.libvirt.vif [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-363213132',display_name='tempest-instance-363213132',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-363213132',id=20,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKLQBtih3QY3upK7E/mGXww52oaLXJUDqVdvSGRFeD61UVQv9k745MBcEt+NyoEeeC/RacC2dH1tl9acg0h5vr7l5lgiy/wKy827YiGaE08rwTfDwDvFUvadQsHi/BzD6w==',key_name='tempest-keypair-919818827',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd12500a556245649485ffa25f9896cc',ramdisk_id='',reservation_id='r-mltrx3sa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1194012855',owner_user_name='tempest-VolumesBackupsTest-1194012855-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3ac027dfac1940a585665db58d3c343b',uuid=66776362-7d85-47fc-a7d5-f2c50e77d9da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.437 255071 DEBUG nova.network.os_vif_util [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converting VIF {"id": "788543c5-e772-41b1-a887-4ced66fc5497", "address": "fa:16:3e:97:f4:ae", "network": {"id": "86b65b38-9c10-44ce-abcd-c34ac448faec", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-568996322-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd12500a556245649485ffa25f9896cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap788543c5-e7", "ovs_interfaceid": "788543c5-e772-41b1-a887-4ced66fc5497", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.438 255071 DEBUG nova.network.os_vif_util [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.438 255071 DEBUG os_vif [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:12:59 compute-0 podman[289129]: 2025-11-29 08:12:59.44026407 +0000 UTC m=+0.106010329 container cleanup 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.440 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.441 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788543c5-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.443 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.445 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.448 255071 INFO os_vif [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:f4:ae,bridge_name='br-int',has_traffic_filtering=True,id=788543c5-e772-41b1-a887-4ced66fc5497,network=Network(86b65b38-9c10-44ce-abcd-c34ac448faec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap788543c5-e7')
Nov 29 08:12:59 compute-0 systemd[1]: libpod-conmon-77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722.scope: Deactivated successfully.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.487 255071 DEBUG nova.compute.manager [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-unplugged-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.487 255071 DEBUG oslo_concurrency.lockutils [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.488 255071 DEBUG oslo_concurrency.lockutils [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.488 255071 DEBUG oslo_concurrency.lockutils [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.488 255071 DEBUG nova.compute.manager [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] No waiting events found dispatching network-vif-unplugged-788543c5-e772-41b1-a887-4ced66fc5497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.489 255071 DEBUG nova.compute.manager [req-2e79bf1c-5fa0-412a-aaa2-be6f2e30a767 req-3428e020-3861-4b80-84b0-daa6d2f49536 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-unplugged-788543c5-e772-41b1-a887-4ced66fc5497 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:12:59 compute-0 podman[289166]: 2025-11-29 08:12:59.5175603 +0000 UTC m=+0.053735585 container remove 77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.525 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bffc2ec5-501a-427b-8335-a5b555a0fa1f]: (4, ('Sat Nov 29 08:12:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec (77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722)\n77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722\nSat Nov 29 08:12:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec (77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722)\n77368de0f15868caa0906ddcddccb9e0411191eb7457caf5cc0198ee83610722\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.527 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9646ff0d-868d-4316-ae67-7eed39447c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.529 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86b65b38-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:59 compute-0 kernel: tap86b65b38-90: left promiscuous mode
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.535 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.548 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.554 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e5795e4b-d71d-4b21-b1c7-df5e7bc89077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.574 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e6874760-af21-440c-bd46-240c494d695b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.576 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a391908d-75e1-4f94-9121-7bf60e736016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.601 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[89252082-5a2d-49a0-a3d5-8164e24115c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617970, 'reachable_time': 41969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289198, 'error': None, 'target': 'ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.604 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-86b65b38-9c10-44ce-abcd-c34ac448faec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:12:59.604 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[340b1f5e-c1e6-4968-a9d4-1893ba5de4e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:12:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d86b65b38\x2d9c10\x2d44ce\x2dabcd\x2dc34ac448faec.mount: Deactivated successfully.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.834 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.870 255071 INFO nova.virt.libvirt.driver [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Deleting instance files /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da_del
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.871 255071 INFO nova.virt.libvirt.driver [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Deletion of /var/lib/nova/instances/66776362-7d85-47fc-a7d5-f2c50e77d9da_del complete
Nov 29 08:12:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 345 KiB/s rd, 28 KiB/s wr, 16 op/s
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.922 255071 INFO nova.compute.manager [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.923 255071 DEBUG oslo.service.loopingcall [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.923 255071 DEBUG nova.compute.manager [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:12:59 compute-0 nova_compute[255040]: 2025-11-29 08:12:59.924 255071 DEBUG nova.network.neutron [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.039 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.040 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.056 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.140 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.140 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.148 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.149 255071 INFO nova.compute.claims [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.272 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386737930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.728 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:00 compute-0 nova_compute[255040]: 2025-11-29 08:13:00.737 255071 DEBUG nova.compute.provider_tree [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:01 compute-0 ceph-mon[75237]: pgmap v1708: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 345 KiB/s rd, 28 KiB/s wr, 16 op/s
Nov 29 08:13:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/386737930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:01 compute-0 sshd-session[289081]: Received disconnect from 45.78.219.195 port 59584:11: Bye Bye [preauth]
Nov 29 08:13:01 compute-0 sshd-session[289081]: Disconnected from authenticating user daemon 45.78.219.195 port 59584 [preauth]
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.799 255071 DEBUG nova.scheduler.client.report [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.807 255071 DEBUG nova.compute.manager [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.808 255071 DEBUG oslo_concurrency.lockutils [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.809 255071 DEBUG oslo_concurrency.lockutils [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.809 255071 DEBUG oslo_concurrency.lockutils [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.809 255071 DEBUG nova.compute.manager [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] No waiting events found dispatching network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.809 255071 WARNING nova.compute.manager [req-ae7fc815-5dc6-4fc4-9e3d-a4b5051a2f36 req-b9e53c26-d291-4da4-a4c9-17d3adcd3e99 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received unexpected event network-vif-plugged-788543c5-e772-41b1-a887-4ced66fc5497 for instance with vm_state active and task_state deleting.
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.821 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.822 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.865 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.866 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:13:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 85 op/s
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.890 255071 INFO nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:13:01 compute-0 nova_compute[255040]: 2025-11-29 08:13:01.910 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.024 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.026 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.026 255071 INFO nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Creating image(s)
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.056 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.081 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.103 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.107 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.151 255071 DEBUG nova.network.neutron [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.174 255071 INFO nova.compute.manager [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Took 2.25 seconds to deallocate network for instance.
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.181 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.182 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.183 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.184 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.207 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.213 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.399 255071 DEBUG nova.policy [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '628f195ee2d74504ac3b01a64427c25f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4685ebb42c1b47019026ac85736a2f9e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.430 255071 INFO nova.compute.manager [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Took 0.25 seconds to detach 1 volumes for instance.
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.471 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.472 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.558 255071 DEBUG oslo_concurrency.processutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.594 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.677 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] resizing rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.793 255071 DEBUG nova.objects.instance [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'migration_context' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.808 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.808 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Ensure instance console log exists: /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.809 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.809 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:02 compute-0 nova_compute[255040]: 2025-11-29 08:13:02.809 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466911526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.061 255071 DEBUG oslo_concurrency.processutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.068 255071 DEBUG nova.compute.provider_tree [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.083 255071 DEBUG nova.scheduler.client.report [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.114 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:03 compute-0 ceph-mon[75237]: pgmap v1709: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 85 op/s
Nov 29 08:13:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3466911526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.157 255071 INFO nova.scheduler.client.report [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Deleted allocations for instance 66776362-7d85-47fc-a7d5-f2c50e77d9da
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.225 255071 DEBUG oslo_concurrency.lockutils [None req-4deae2d0-931e-48c4-96f6-86a199feb116 3ac027dfac1940a585665db58d3c343b dd12500a556245649485ffa25f9896cc - - default default] Lock "66776362-7d85-47fc-a7d5-f2c50e77d9da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.268 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Successfully created port: 118420be-1bec-4d74-a16f-38c9916df2ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:13:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 702 KiB/s wr, 126 op/s
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.885 255071 DEBUG nova.compute.manager [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Received event network-vif-deleted-788543c5-e772-41b1-a887-4ced66fc5497 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.886 255071 DEBUG nova.compute.manager [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.886 255071 DEBUG nova.compute.manager [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing instance network info cache due to event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.886 255071 DEBUG oslo_concurrency.lockutils [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.886 255071 DEBUG oslo_concurrency.lockutils [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.886 255071 DEBUG nova.network.neutron [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.927 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Successfully updated port: 118420be-1bec-4d74-a16f-38c9916df2ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.949 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.949 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquired lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:03 compute-0 nova_compute[255040]: 2025-11-29 08:13:03.950 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.014 255071 DEBUG nova.compute.manager [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-changed-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.015 255071 DEBUG nova.compute.manager [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Refreshing instance network info cache due to event network-changed-118420be-1bec-4d74-a16f-38c9916df2ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.016 255071 DEBUG oslo_concurrency.lockutils [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785298105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.445 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.531 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:13:04 compute-0 nova_compute[255040]: 2025-11-29 08:13:04.837 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Nov 29 08:13:05 compute-0 ceph-mon[75237]: pgmap v1710: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 702 KiB/s wr, 126 op/s
Nov 29 08:13:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3785298105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.299 255071 DEBUG nova.network.neutron [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating instance_info_cache with network_info: [{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.316 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Releasing lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.317 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Instance network_info: |[{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.318 255071 DEBUG oslo_concurrency.lockutils [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.318 255071 DEBUG nova.network.neutron [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Refreshing network info cache for port 118420be-1bec-4d74-a16f-38c9916df2ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.322 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Start _get_guest_xml network_info=[{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.328 255071 WARNING nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.332 255071 DEBUG nova.virt.libvirt.host [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.333 255071 DEBUG nova.virt.libvirt.host [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.346 255071 DEBUG nova.virt.libvirt.host [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.347 255071 DEBUG nova.virt.libvirt.host [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.349 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.350 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.350 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.352 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.352 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.353 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.353 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.353 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.353 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.354 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.354 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.354 255071 DEBUG nova.virt.hardware [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.358 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.527 255071 DEBUG nova.network.neutron [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updated VIF entry in instance network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.529 255071 DEBUG nova.network.neutron [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.549 255071 DEBUG oslo_concurrency.lockutils [req-96b6a64b-4162-4f47-b257-e4284ed2af7a req-a2392ca3-9629-44af-aa4c-c593bde4c3b9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934388926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.839 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.863 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:05 compute-0 nova_compute[255040]: 2025-11-29 08:13:05.868 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Nov 29 08:13:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Nov 29 08:13:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 08:13:06 compute-0 ceph-mon[75237]: osdmap e358: 3 total, 3 up, 3 in
Nov 29 08:13:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/934388926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Nov 29 08:13:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340462698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.382 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.384 255071 DEBUG nova.virt.libvirt.vif [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2007858834',display_name='tempest-SnapshotDataIntegrityTests-server-2007858834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2007858834',id=22,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBKaRIcO7AmYjaJvzcprrw01/xfXd0JXKSpN5qfxtlP/ZK/lXduysUlgNUBiHURonuasBwtRu1mSrog6vjuWzi0jEJhcL/o3xoH/UXmwYNWA1x2U/xHUSdn4L6A8zrUHjg==',key_name='tempest-keypair-1134313540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4685ebb42c1b47019026ac85736a2f9e',ramdisk_id='',reservation_id='r-vdbkyupn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-438088631',owner_user_name='tempest-SnapshotDataIntegrityTests-438088631-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='628f195ee2d74504ac3b01a64427c25f',uuid=73161fa0-86cc-4d12-bbb4-64386b62bf99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.385 255071 DEBUG nova.network.os_vif_util [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converting VIF {"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.385 255071 DEBUG nova.network.os_vif_util [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.387 255071 DEBUG nova.objects.instance [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.465 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <uuid>73161fa0-86cc-4d12-bbb4-64386b62bf99</uuid>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <name>instance-00000016</name>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-2007858834</nova:name>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:13:05</nova:creationTime>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:user uuid="628f195ee2d74504ac3b01a64427c25f">tempest-SnapshotDataIntegrityTests-438088631-project-member</nova:user>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:project uuid="4685ebb42c1b47019026ac85736a2f9e">tempest-SnapshotDataIntegrityTests-438088631</nova:project>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <nova:port uuid="118420be-1bec-4d74-a16f-38c9916df2ec">
Nov 29 08:13:06 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <system>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="serial">73161fa0-86cc-4d12-bbb4-64386b62bf99</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="uuid">73161fa0-86cc-4d12-bbb4-64386b62bf99</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </system>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <os>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </os>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <features>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </features>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/73161fa0-86cc-4d12-bbb4-64386b62bf99_disk">
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </source>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config">
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </source>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:13:06 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:8d:66:24"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <target dev="tap118420be-1b"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/console.log" append="off"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <video>
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </video>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:13:06 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:13:06 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:13:06 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:13:06 compute-0 nova_compute[255040]: </domain>
Nov 29 08:13:06 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
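The <memory> element in the domain XML above is expressed in KiB, so 131072 corresponds to the 128 MiB advertised by the m1.nano flavor in the <nova:flavor> metadata. A minimal sketch for pulling those fields back out of a saved copy of the XML (the /tmp path is illustrative, not taken from the log):

    import xml.etree.ElementTree as ET

    # Assumption: the domain XML logged above was saved to this illustrative path.
    root = ET.parse("/tmp/instance-00000016.xml").getroot()

    uuid = root.findtext("uuid")                        # 73161fa0-86cc-4d12-bbb4-64386b62bf99
    memory_mib = int(root.findtext("memory")) // 1024   # <memory> is in KiB, so 131072 -> 128
    vcpus = int(root.findtext("vcpu"))                  # 1

    print(f"{uuid}: {memory_mib} MiB, {vcpus} vCPU")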
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.467 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Preparing to wait for external event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.467 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.467 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.467 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.468 255071 DEBUG nova.virt.libvirt.vif [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2007858834',display_name='tempest-SnapshotDataIntegrityTests-server-2007858834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2007858834',id=22,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBKaRIcO7AmYjaJvzcprrw01/xfXd0JXKSpN5qfxtlP/ZK/lXduysUlgNUBiHURonuasBwtRu1mSrog6vjuWzi0jEJhcL/o3xoH/UXmwYNWA1x2U/xHUSdn4L6A8zrUHjg==',key_name='tempest-keypair-1134313540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4685ebb42c1b47019026ac85736a2f9e',ramdisk_id='',reservation_id='r-vdbkyupn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-438088631',owner_user_name='tempest-SnapshotDataIntegrityTests-438088631-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='628f195ee2d74504ac3b01a64427c25f',uuid=73161fa0-86cc-4d12-bbb4-64386b62bf99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": 
"118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.468 255071 DEBUG nova.network.os_vif_util [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converting VIF {"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.469 255071 DEBUG nova.network.os_vif_util [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.469 255071 DEBUG os_vif [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.470 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.470 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.471 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.473 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.473 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap118420be-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.474 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap118420be-1b, col_values=(('external_ids', {'iface-id': '118420be-1bec-4d74-a16f-38c9916df2ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:66:24', 'vm-uuid': '73161fa0-86cc-4d12-bbb4-64386b62bf99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.476 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:06 compute-0 NetworkManager[49116]: <info>  [1764403986.4768] manager: (tap118420be-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.480 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.484 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.485 255071 INFO os_vif [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b')
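The AddPortCommand/DbSetCommand transaction above attaches tap118420be-1b to br-int and stamps its external_ids with the Neutron port ID and instance MAC; ovn-controller uses those same keys to claim the port a moment later. A minimal read-back sketch (assumption: run as root on compute-0, where ovs-vsctl can talk to the local switch):

    import subprocess

    # Read the external_ids written by the DbSetCommand in the log above;
    # iface-id and attached-mac should match the Neutron port and the domain XML.
    out = subprocess.run(
        ["ovs-vsctl", "get", "Interface", "tap118420be-1b", "external_ids"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)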
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.527 255071 DEBUG nova.network.neutron [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updated VIF entry in instance network info cache for port 118420be-1bec-4d74-a16f-38c9916df2ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.528 255071 DEBUG nova.network.neutron [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating instance_info_cache with network_info: [{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.555 255071 DEBUG oslo_concurrency.lockutils [req-9e92a662-9794-4642-9df6-d73a9a3dfd18 req-2e3130e4-2319-46a5-b4d1-4e2bf9bd325f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.561 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.562 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.562 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No VIF found with MAC fa:16:3e:8d:66:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.562 255071 INFO nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Using config drive
Nov 29 08:13:06 compute-0 nova_compute[255040]: 2025-11-29 08:13:06.586 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.023 255071 INFO nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Creating config drive at /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.031 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp654d6eqw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.169 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp654d6eqw" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Nov 29 08:13:07 compute-0 ceph-mon[75237]: pgmap v1712: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Nov 29 08:13:07 compute-0 ceph-mon[75237]: osdmap e359: 3 total, 3 up, 3 in
Nov 29 08:13:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2340462698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 08:13:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.220 255071 DEBUG nova.storage.rbd_utils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] rbd image 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.224 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.395 255071 DEBUG oslo_concurrency.processutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config 73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.396 255071 INFO nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Deleting local config drive /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99/disk.config because it was imported into RBD.
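The config drive is built locally with mkisofs and then imported into the Ceph vms pool as <uuid>_disk.config, which is the RBD source the cdrom device in the domain XML points at; the local ISO is deleted once the import succeeds. A minimal sketch to confirm the imported image (assumption: run on compute-0 with the same cephx identity shown in the log):

    import subprocess

    # Show the RBD image created by the "rbd import" call logged above.
    subprocess.run(
        ["rbd", "info", "vms/73161fa0-86cc-4d12-bbb4-64386b62bf99_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )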
Nov 29 08:13:07 compute-0 kernel: tap118420be-1b: entered promiscuous mode
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.4532] manager: (tap118420be-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 08:13:07 compute-0 ovn_controller[153295]: 2025-11-29T08:13:07Z|00200|binding|INFO|Claiming lport 118420be-1bec-4d74-a16f-38c9916df2ec for this chassis.
Nov 29 08:13:07 compute-0 ovn_controller[153295]: 2025-11-29T08:13:07Z|00201|binding|INFO|118420be-1bec-4d74-a16f-38c9916df2ec: Claiming fa:16:3e:8d:66:24 10.100.0.9
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.454 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.463 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:66:24 10.100.0.9'], port_security=['fa:16:3e:8d:66:24 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73161fa0-86cc-4d12-bbb4-64386b62bf99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4685ebb42c1b47019026ac85736a2f9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '145d1636-c578-4a94-b60c-1faee92485c3 c93e00d2-a27f-4f8a-a521-48da2c2cd6cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69515d80-21fb-4aef-ab68-ee0d263d9564, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=118420be-1bec-4d74-a16f-38c9916df2ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.464 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 118420be-1bec-4d74-a16f-38c9916df2ec in datapath 3ac59a05-6e29-4fc3-9a46-eac16f636fbf bound to our chassis
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.466 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ac59a05-6e29-4fc3-9a46-eac16f636fbf
Nov 29 08:13:07 compute-0 ovn_controller[153295]: 2025-11-29T08:13:07Z|00202|binding|INFO|Setting lport 118420be-1bec-4d74-a16f-38c9916df2ec ovn-installed in OVS
Nov 29 08:13:07 compute-0 ovn_controller[153295]: 2025-11-29T08:13:07Z|00203|binding|INFO|Setting lport 118420be-1bec-4d74-a16f-38c9916df2ec up in Southbound
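At this point ovn-controller has claimed the logical port for this chassis and marked it up in the southbound database, which is what lets Neutron emit the network-vif-plugged event Nova is waiting on. A minimal sketch to inspect that binding (assumption: run somewhere ovn-sbctl can reach the OVN southbound DB):

    import subprocess

    # Look up the Port_Binding row for the logical port claimed in the log above;
    # its chassis column should point at compute-0 and "up" should be true.
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=118420be-1bec-4d74-a16f-38c9916df2ec"],
        check=True,
    )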
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.484 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.489 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab1122c-9c87-4a7c-aba2-897131e9f2e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.489 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ac59a05-61 in ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.492 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ac59a05-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.492 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[71af82b8-d5b2-4aca-9acb-646975363bd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.489 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.493 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3b042e25-fa90-474f-b0be-22a8e5c99e65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 systemd-machined[216271]: New machine qemu-22-instance-00000016.
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.513 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[c923b272-4f5e-4b48-b9de-b5f50067db15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Nov 29 08:13:07 compute-0 systemd-udevd[289558]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.5457] device (tap118420be-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.5467] device (tap118420be-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.546 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c51767-5a0c-4901-8570-3fce85067dc7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 podman[289541]: 2025-11-29 08:13:07.58133315 +0000 UTC m=+0.081720130 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.582 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[55b47f7c-fd6e-4d27-a083-8f9c740e4987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.588 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[96ea468c-45f6-4cac-acd0-2c8f07162ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 systemd-udevd[289565]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.5898] manager: (tap3ac59a05-60): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.633 255071 DEBUG nova.compute.manager [req-d875763d-2357-40e0-9703-b26cda165bd6 req-71df725f-3748-422e-aa5b-355d216e237d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.635 255071 DEBUG oslo_concurrency.lockutils [req-d875763d-2357-40e0-9703-b26cda165bd6 req-71df725f-3748-422e-aa5b-355d216e237d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.635 255071 DEBUG oslo_concurrency.lockutils [req-d875763d-2357-40e0-9703-b26cda165bd6 req-71df725f-3748-422e-aa5b-355d216e237d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.635 255071 DEBUG oslo_concurrency.lockutils [req-d875763d-2357-40e0-9703-b26cda165bd6 req-71df725f-3748-422e-aa5b-355d216e237d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.635 255071 DEBUG nova.compute.manager [req-d875763d-2357-40e0-9703-b26cda165bd6 req-71df725f-3748-422e-aa5b-355d216e237d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Processing event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.639 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6dbd41-4cb8-4bc5-a5ed-a0df39144144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.643 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[4717a395-1176-4201-b262-59a909d464e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.6701] device (tap3ac59a05-60): carrier: link connected
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.682 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0870cc-274f-4f07-9ffc-bbfc48fb8e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.701 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[16d5c97b-b507-4bef-a51f-37405afbae0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ac59a05-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:15:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622500, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289597, 'error': None, 'target': 'ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.724 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c56bcd08-4a4b-495b-af36-a5ce84d5f562]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:1568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622500, 'tstamp': 622500}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289598, 'error': None, 'target': 'ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.749 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d897e941-6c4a-4b3e-9c6e-f2d6f847a02c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ac59a05-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:15:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622500, 'reachable_time': 31613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289599, 'error': None, 'target': 'ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.792 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e057cb7d-6947-47ab-98c6-1d0fe04b2dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.877 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cb965eb8-a0da-4e81-8fef-ef946853eb70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.879 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac59a05-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.879 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.880 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ac59a05-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:07 compute-0 NetworkManager[49116]: <info>  [1764403987.8829] manager: (tap3ac59a05-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 08:13:07 compute-0 kernel: tap3ac59a05-60: entered promiscuous mode
Nov 29 08:13:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 109 op/s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.889 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ac59a05-60, col_values=(('external_ids', {'iface-id': '97dcad99-21fd-4887-92fc-118b6a771872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.887 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.890 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 ovn_controller[153295]: 2025-11-29T08:13:07Z|00204|binding|INFO|Releasing lport 97dcad99-21fd-4887-92fc-118b6a771872 from this chassis (sb_readonly=0)
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.891 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.892 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ac59a05-6e29-4fc3-9a46-eac16f636fbf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ac59a05-6e29-4fc3-9a46-eac16f636fbf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.893 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bfedc8-e387-425c-8d87-5470c11f36e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.894 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-3ac59a05-6e29-4fc3-9a46-eac16f636fbf
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/3ac59a05-6e29-4fc3-9a46-eac16f636fbf.pid.haproxy
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 3ac59a05-6e29-4fc3-9a46-eac16f636fbf
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:13:07 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:07.896 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'env', 'PROCESS_TAG=haproxy-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ac59a05-6e29-4fc3-9a46-eac16f636fbf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:13:07 compute-0 nova_compute[255040]: 2025-11-29 08:13:07.906 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.128 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403988.127873, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.129 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] VM Started (Lifecycle Event)
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.132 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.136 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.139 255071 INFO nova.virt.libvirt.driver [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Instance spawned successfully.
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.140 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.148 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.153 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.163 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.164 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.165 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.165 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.166 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.166 255071 DEBUG nova.virt.libvirt.driver [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.175 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.175 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403988.128909, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.175 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] VM Paused (Lifecycle Event)
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.197 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:08 compute-0 ceph-mon[75237]: osdmap e360: 3 total, 3 up, 3 in
Nov 29 08:13:08 compute-0 ceph-mon[75237]: pgmap v1715: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 109 op/s
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.203 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764403988.1361148, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.204 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] VM Resumed (Lifecycle Event)
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.222 255071 INFO nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Took 6.20 seconds to spawn the instance on the hypervisor.
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.222 255071 DEBUG nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.224 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.231 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.263 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.295 255071 INFO nova.compute.manager [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Took 8.20 seconds to build instance.
Nov 29 08:13:08 compute-0 nova_compute[255040]: 2025-11-29 08:13:08.310 255071 DEBUG oslo_concurrency.lockutils [None req-39b466f7-9855-4fc2-9316-3df417bfc497 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:08 compute-0 podman[289673]: 2025-11-29 08:13:08.317377147 +0000 UTC m=+0.054413493 container create 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:13:08 compute-0 systemd[1]: Started libpod-conmon-9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a.scope.
Nov 29 08:13:08 compute-0 podman[289673]: 2025-11-29 08:13:08.289498943 +0000 UTC m=+0.026535299 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:13:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24178fad610e2166897e96c8259caaf4dfab50842dcd197f0abfc061a790e85c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:08 compute-0 podman[289673]: 2025-11-29 08:13:08.408425549 +0000 UTC m=+0.145461915 container init 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:13:08 compute-0 podman[289673]: 2025-11-29 08:13:08.414435291 +0000 UTC m=+0.151471637 container start 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:08 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [NOTICE]   (289692) : New worker (289694) forked
Nov 29 08:13:08 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [NOTICE]   (289692) : Loading success.
Nov 29 08:13:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Nov 29 08:13:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 08:13:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Nov 29 08:13:09 compute-0 ovn_controller[153295]: 2025-11-29T08:13:09Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Nov 29 08:13:09 compute-0 ovn_controller[153295]: 2025-11-29T08:13:09Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9f:12:20 10.100.0.9
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.704 255071 DEBUG nova.compute.manager [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.704 255071 DEBUG oslo_concurrency.lockutils [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.704 255071 DEBUG oslo_concurrency.lockutils [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.705 255071 DEBUG oslo_concurrency.lockutils [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.705 255071 DEBUG nova.compute.manager [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] No waiting events found dispatching network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.705 255071 WARNING nova.compute.manager [req-0e763000-e804-4b32-8cd8-267fc4213509 req-70493ea4-0b1d-46ad-9f1e-4f26400aab60 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received unexpected event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec for instance with vm_state active and task_state None.
Nov 29 08:13:09 compute-0 nova_compute[255040]: 2025-11-29 08:13:09.840 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 82 op/s
Nov 29 08:13:10 compute-0 ceph-mon[75237]: osdmap e361: 3 total, 3 up, 3 in
Nov 29 08:13:10 compute-0 ceph-mon[75237]: pgmap v1717: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 82 op/s
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.478 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.641 255071 DEBUG nova.compute.manager [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-changed-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.642 255071 DEBUG nova.compute.manager [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Refreshing instance network info cache due to event network-changed-118420be-1bec-4d74-a16f-38c9916df2ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.642 255071 DEBUG oslo_concurrency.lockutils [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.643 255071 DEBUG oslo_concurrency.lockutils [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:11 compute-0 nova_compute[255040]: 2025-11-29 08:13:11.643 255071 DEBUG nova.network.neutron [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Refreshing network info cache for port 118420be-1bec-4d74-a16f-38c9916df2ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:13:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 6.5 MiB/s rd, 3.4 MiB/s wr, 322 op/s
Nov 29 08:13:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Nov 29 08:13:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 08:13:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Nov 29 08:13:12 compute-0 nova_compute[255040]: 2025-11-29 08:13:12.602 255071 DEBUG nova.network.neutron [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updated VIF entry in instance network info cache for port 118420be-1bec-4d74-a16f-38c9916df2ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:13:12 compute-0 nova_compute[255040]: 2025-11-29 08:13:12.604 255071 DEBUG nova.network.neutron [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating instance_info_cache with network_info: [{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:12 compute-0 nova_compute[255040]: 2025-11-29 08:13:12.625 255071 DEBUG oslo_concurrency.lockutils [req-37dbb471-c570-434d-ba3b-67688e1cbd5d req-34794742-82b1-4109-b91c-42b8c7df4705 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:12 compute-0 ceph-mon[75237]: pgmap v1718: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 6.5 MiB/s rd, 3.4 MiB/s wr, 322 op/s
Nov 29 08:13:12 compute-0 ceph-mon[75237]: osdmap e362: 3 total, 3 up, 3 in
Nov 29 08:13:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Nov 29 08:13:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 08:13:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Nov 29 08:13:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 8.1 MiB/s rd, 3.4 MiB/s wr, 388 op/s
Nov 29 08:13:14 compute-0 ovn_controller[153295]: 2025-11-29T08:13:14Z|00042|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.9
Nov 29 08:13:14 compute-0 ovn_controller[153295]: 2025-11-29T08:13:14Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9f:12:20 10.100.0.9
Nov 29 08:13:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1477883991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:14 compute-0 ovn_controller[153295]: 2025-11-29T08:13:14Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:12:20 10.100.0.9
Nov 29 08:13:14 compute-0 ovn_controller[153295]: 2025-11-29T08:13:14Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:12:20 10.100.0.9
Nov 29 08:13:14 compute-0 nova_compute[255040]: 2025-11-29 08:13:14.411 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403979.4097302, 66776362-7d85-47fc-a7d5-f2c50e77d9da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:14 compute-0 nova_compute[255040]: 2025-11-29 08:13:14.412 255071 INFO nova.compute.manager [-] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] VM Stopped (Lifecycle Event)
Nov 29 08:13:14 compute-0 nova_compute[255040]: 2025-11-29 08:13:14.437 255071 DEBUG nova.compute.manager [None req-5010013f-8a70-4a35-8b0a-9001366e7357 - - - - - -] [instance: 66776362-7d85-47fc-a7d5-f2c50e77d9da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:14 compute-0 ceph-mon[75237]: osdmap e363: 3 total, 3 up, 3 in
Nov 29 08:13:14 compute-0 ceph-mon[75237]: pgmap v1721: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 8.1 MiB/s rd, 3.4 MiB/s wr, 388 op/s
Nov 29 08:13:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1477883991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:14 compute-0 nova_compute[255040]: 2025-11-29 08:13:14.844 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 6.0 MiB/s rd, 14 MiB/s wr, 360 op/s
Nov 29 08:13:15 compute-0 podman[289704]: 2025-11-29 08:13:15.93716811 +0000 UTC m=+0.091678141 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 08:13:16 compute-0 nova_compute[255040]: 2025-11-29 08:13:16.481 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:16 compute-0 ceph-mon[75237]: pgmap v1722: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 6.0 MiB/s rd, 14 MiB/s wr, 360 op/s
Nov 29 08:13:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 300 op/s
Nov 29 08:13:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Nov 29 08:13:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 08:13:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 ceph-mon[75237]: pgmap v1723: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 300 op/s
Nov 29 08:13:19 compute-0 ceph-mon[75237]: osdmap e364: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 nova_compute[255040]: 2025-11-29 08:13:19.849 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 29 MiB/s wr, 196 op/s
Nov 29 08:13:21 compute-0 ceph-mon[75237]: pgmap v1725: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 29 MiB/s wr, 196 op/s
Nov 29 08:13:21 compute-0 nova_compute[255040]: 2025-11-29 08:13:21.540 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 2.8 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 88 KiB/s rd, 62 MiB/s wr, 150 op/s
Nov 29 08:13:23 compute-0 ceph-mon[75237]: pgmap v1726: 305 pgs: 305 active+clean; 2.8 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 88 KiB/s rd, 62 MiB/s wr, 150 op/s
Nov 29 08:13:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 3.0 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 133 KiB/s rd, 70 MiB/s wr, 222 op/s
Nov 29 08:13:24 compute-0 ceph-mon[75237]: pgmap v1727: 305 pgs: 305 active+clean; 3.0 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 133 KiB/s rd, 70 MiB/s wr, 222 op/s
Nov 29 08:13:24 compute-0 nova_compute[255040]: 2025-11-29 08:13:24.855 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 243 KiB/s rd, 88 MiB/s wr, 223 op/s
Nov 29 08:13:26 compute-0 ovn_controller[153295]: 2025-11-29T08:13:26Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:66:24 10.100.0.9
Nov 29 08:13:26 compute-0 ovn_controller[153295]: 2025-11-29T08:13:26Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:66:24 10.100.0.9
Nov 29 08:13:26 compute-0 nova_compute[255040]: 2025-11-29 08:13:26.542 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:27.136 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:27.138 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:27.139 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:27 compute-0 ceph-mon[75237]: pgmap v1728: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 243 KiB/s rd, 88 MiB/s wr, 223 op/s
Nov 29 08:13:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 243 KiB/s rd, 88 MiB/s wr, 223 op/s
Nov 29 08:13:28 compute-0 ceph-mon[75237]: pgmap v1729: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 243 KiB/s rd, 88 MiB/s wr, 223 op/s
Nov 29 08:13:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:29 compute-0 nova_compute[255040]: 2025-11-29 08:13:29.857 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 383 KiB/s rd, 72 MiB/s wr, 196 op/s
Nov 29 08:13:29 compute-0 podman[289722]: 2025-11-29 08:13:29.987281207 +0000 UTC m=+0.121981351 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:13:30 compute-0 nova_compute[255040]: 2025-11-29 08:13:30.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:30 compute-0 nova_compute[255040]: 2025-11-29 08:13:30.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Nov 29 08:13:31 compute-0 nova_compute[255040]: 2025-11-29 08:13:31.545 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 08:13:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Nov 29 08:13:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 2.6 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 544 KiB/s rd, 54 MiB/s wr, 274 op/s
Nov 29 08:13:32 compute-0 ceph-mon[75237]: pgmap v1730: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 383 KiB/s rd, 72 MiB/s wr, 196 op/s
Nov 29 08:13:32 compute-0 nova_compute[255040]: 2025-11-29 08:13:32.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:32 compute-0 nova_compute[255040]: 2025-11-29 08:13:32.979 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:13:32 compute-0 nova_compute[255040]: 2025-11-29 08:13:32.979 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:13:33 compute-0 nova_compute[255040]: 2025-11-29 08:13:33.247 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:33 compute-0 nova_compute[255040]: 2025-11-29 08:13:33.247 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:33 compute-0 nova_compute[255040]: 2025-11-29 08:13:33.248 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:13:33 compute-0 nova_compute[255040]: 2025-11-29 08:13:33.248 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cde9039b-1882-4723-9524-c51a289f67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:33 compute-0 ceph-mon[75237]: osdmap e365: 3 total, 3 up, 3 in
Nov 29 08:13:33 compute-0 ceph-mon[75237]: pgmap v1732: 305 pgs: 305 active+clean; 2.6 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 544 KiB/s rd, 54 MiB/s wr, 274 op/s
Nov 29 08:13:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 506 KiB/s rd, 35 MiB/s wr, 196 op/s
Nov 29 08:13:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063362857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063362857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:34 compute-0 ceph-mon[75237]: pgmap v1733: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 506 KiB/s rd, 35 MiB/s wr, 196 op/s
Nov 29 08:13:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1063362857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1063362857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.290 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.303 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.304 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.304 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.305 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Nov 29 08:13:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 08:13:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.327 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.328 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.328 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.328 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.328 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476261438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.789 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.881 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.890 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.890 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.894 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:34 compute-0 nova_compute[255040]: 2025-11-29 08:13:34.894 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.058 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.059 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=59.94271469116211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.060 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.060 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.143 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance cde9039b-1882-4723-9524-c51a289f67b0 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.143 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.144 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.144 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.158 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.179 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.180 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.192 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.212 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.261 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Nov 29 08:13:35 compute-0 ceph-mon[75237]: osdmap e366: 3 total, 3 up, 3 in
Nov 29 08:13:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/476261438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 08:13:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Nov 29 08:13:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948213296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.707 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.712 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.732 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.773 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:13:35 compute-0 nova_compute[255040]: 2025-11-29 08:13:35.773 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 08:13:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Nov 29 08:13:36 compute-0 nova_compute[255040]: 2025-11-29 08:13:36.444 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:36 compute-0 nova_compute[255040]: 2025-11-29 08:13:36.444 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:36 compute-0 nova_compute[255040]: 2025-11-29 08:13:36.445 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:13:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 08:13:36 compute-0 ceph-mon[75237]: osdmap e367: 3 total, 3 up, 3 in
Nov 29 08:13:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/948213296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:36 compute-0 ceph-mon[75237]: pgmap v1736: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 08:13:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Nov 29 08:13:36 compute-0 nova_compute[255040]: 2025-11-29 08:13:36.548 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2213561170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2213561170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:37 compute-0 ceph-mon[75237]: osdmap e368: 3 total, 3 up, 3 in
Nov 29 08:13:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2213561170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2213561170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:37 compute-0 podman[289793]: 2025-11-29 08:13:37.8992112 +0000 UTC m=+0.059761167 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:13:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 356 KiB/s rd, 94 KiB/s wr, 83 op/s
Nov 29 08:13:37 compute-0 nova_compute[255040]: 2025-11-29 08:13:37.970 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Nov 29 08:13:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 ceph-mon[75237]: pgmap v1738: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 356 KiB/s rd, 94 KiB/s wr, 83 op/s
Nov 29 08:13:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Nov 29 08:13:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:13:38
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', 'images', '.mgr']
Nov 29 08:13:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:13:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134355258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134355258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:39 compute-0 ceph-mon[75237]: osdmap e369: 3 total, 3 up, 3 in
Nov 29 08:13:39 compute-0 ceph-mon[75237]: osdmap e370: 3 total, 3 up, 3 in
Nov 29 08:13:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1134355258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1134355258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:39 compute-0 nova_compute[255040]: 2025-11-29 08:13:39.867 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 635 KiB/s rd, 39 KiB/s wr, 131 op/s
Nov 29 08:13:39 compute-0 nova_compute[255040]: 2025-11-29 08:13:39.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.098 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.098 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.115 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.170 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.171 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.179 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.180 255071 INFO nova.compute.claims [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.338 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Nov 29 08:13:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 08:13:40 compute-0 ceph-mon[75237]: pgmap v1741: 305 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 292 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 635 KiB/s rd, 39 KiB/s wr, 131 op/s
Nov 29 08:13:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Nov 29 08:13:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005370069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.772 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.779 255071 DEBUG nova.compute.provider_tree [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.797 255071 DEBUG nova.scheduler.client.report [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.829 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.830 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.878 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.879 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.903 255071 INFO nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.926 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:13:40 compute-0 nova_compute[255040]: 2025-11-29 08:13:40.979 255071 INFO nova.virt.block_device [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Booting with volume 710d1fe4-9be5-417f-b2c6-0a997cd9f339 at /dev/vda
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.127 255071 DEBUG nova.policy [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e62d407203540599a65ac50d5d447b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3df24932e2a44aeab3c2aece8a045774', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.167 255071 DEBUG os_brick.utils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.170 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.185 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.185 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[a091f257-6396-40de-bbd3-b5c4c09e265c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.186 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.196 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.197 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[3034f093-c2c5-4ce5-a943-8fe562f58aeb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.200 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.212 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.213 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c35541-239a-4429-821c-144db308fb06]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.215 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[f43eb81c-b205-4ca7-abf5-a93587106be9]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.216 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.256 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.261 255071 DEBUG os_brick.initiator.connectors.lightos [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.261 255071 DEBUG os_brick.initiator.connectors.lightos [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.261 255071 DEBUG os_brick.initiator.connectors.lightos [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.262 255071 DEBUG os_brick.utils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.262 255071 DEBUG nova.virt.block_device [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating existing volume attachment record: a993ab48-6981-4804-a0a5-02e79c56b295 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:13:41 compute-0 ovn_controller[153295]: 2025-11-29T08:13:41Z|00205|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.552 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:41 compute-0 ceph-mon[75237]: osdmap e371: 3 total, 3 up, 3 in
Nov 29 08:13:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1005370069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1858857660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1858857660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/85726550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 1.6 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 250 KiB/s rd, 19 KiB/s wr, 177 op/s
Nov 29 08:13:41 compute-0 nova_compute[255040]: 2025-11-29 08:13:41.919 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Successfully created port: 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.341 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.342 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.343 255071 INFO nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Creating image(s)
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.343 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.343 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Ensure instance console log exists: /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.344 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.344 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.344 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:42.722 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:42.723 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:13:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:42.723 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.736 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.794 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Successfully updated port: 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:13:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1858857660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1858857660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/85726550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:42 compute-0 ceph-mon[75237]: pgmap v1743: 305 pgs: 305 active+clean; 1.6 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 250 KiB/s rd, 19 KiB/s wr, 177 op/s
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.820 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.820 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquired lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.820 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.876 255071 DEBUG nova.compute.manager [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.877 255071 DEBUG nova.compute.manager [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing instance network info cache due to event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.877 255071 DEBUG oslo_concurrency.lockutils [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:42 compute-0 nova_compute[255040]: 2025-11-29 08:13:42.968 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:13:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Nov 29 08:13:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 08:13:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Nov 29 08:13:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 924 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 16 KiB/s wr, 189 op/s
Nov 29 08:13:43 compute-0 nova_compute[255040]: 2025-11-29 08:13:43.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.353 255071 DEBUG nova.network.neutron [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating instance_info_cache with network_info: [{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.372 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Releasing lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.372 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance network_info: |[{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.373 255071 DEBUG oslo_concurrency.lockutils [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.373 255071 DEBUG nova.network.neutron [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.376 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Start _get_guest_xml network_info=[{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-710d1fe4-9be5-417f-b2c6-0a997cd9f339', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '710d1fe4-9be5-417f-b2c6-0a997cd9f339', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '40d7aec5-9705-4885-8d58-7fcfdb8eac5c', 'attached_at': '', 'detached_at': '', 'volume_id': '710d1fe4-9be5-417f-b2c6-0a997cd9f339', 'serial': '710d1fe4-9be5-417f-b2c6-0a997cd9f339'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': 'a993ab48-6981-4804-a0a5-02e79c56b295', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.381 255071 WARNING nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.389 255071 DEBUG nova.virt.libvirt.host [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.389 255071 DEBUG nova.virt.libvirt.host [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.394 255071 DEBUG nova.virt.libvirt.host [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.395 255071 DEBUG nova.virt.libvirt.host [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.396 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.396 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.396 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.396 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.397 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.397 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.397 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.397 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.398 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.398 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.398 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.398 255071 DEBUG nova.virt.hardware [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
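[editor's note] The nova.virt.hardware lines above show an unconstrained topology request (preferred 0:0:0, limits 65536 per dimension) for a 1-vCPU flavor collapsing to the single topology 1:1:1. A minimal standalone sketch of that enumeration follows; it is an illustration of the idea, not Nova's actual _get_possible_cpu_topologies, and the function name is ours.

    # Illustration only: enumerate socket/core/thread splits for a vCPU count,
    # capped per dimension, as the hardware.py debug lines above describe
    # (limits 65536:65536:65536 leave exactly one topology, 1:1:1, for 1 vCPU).
    from collections import namedtuple

    VirtCPUTopology = namedtuple('VirtCPUTopology', 'sockets cores threads')

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topologies.append(VirtCPUTopology(s, c, t))
        return topologies

    print(possible_topologies(1))
    # [VirtCPUTopology(sockets=1, cores=1, threads=1)]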
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.421 255071 DEBUG nova.storage.rbd_utils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
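[editor's note] rbd_utils.py:80 reports that the instance's config-drive image is not yet in Ceph. A hedged sketch of the same existence probe with the rados/rbd Python bindings; the pool, client id and conf path are taken from the surrounding lines, while the helper name is ours, not Nova's.

    # Sketch, not Nova's rbd_utils: connect as client.openstack and check whether
    # an RBD image exists in a pool by trying to open it read-only.
    import rados
    import rbd

    def rbd_image_exists(pool, name, conf='/etc/ceph/ceph.conf', user='openstack'):
        cluster = rados.Rados(conffile=conf, rados_id=user)
        cluster.connect()
        ioctx = cluster.open_ioctx(pool)
        try:
            with rbd.Image(ioctx, name, read_only=True):
                return True
        except rbd.ImageNotFound:
            return False
        finally:
            ioctx.close()
            cluster.shutdown()

    print(rbd_image_exists('vms', '40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config'))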
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.425 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:44 compute-0 ceph-mon[75237]: osdmap e372: 3 total, 3 up, 3 in
Nov 29 08:13:44 compute-0 ceph-mon[75237]: pgmap v1745: 305 pgs: 305 active+clean; 924 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 16 KiB/s wr, 189 op/s
Nov 29 08:13:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203946753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.859 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
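[editor's note] The two oslo_concurrency.processutils lines bracket a subprocess call: Nova asked the Ceph monitor for its map and got it back in 0.434s. Roughly the same call made directly through oslo.concurrency, with the command exactly as logged; the JSON field names assume the usual `ceph mon dump` layout and error handling is omitted.

    # Sketch of the subprocess call the two processutils lines record: run
    # `ceph mon dump --format=json` as client.openstack and parse the reply.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print([m['addr'] for m in mon_map['mons']])   # monitor endpoints, e.g. 192.168.122.100:6789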
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.880 255071 DEBUG nova.virt.libvirt.vif [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-665396298',display_name='tempest-TestVolumeBootPattern-server-665396298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-665396298',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-mu2vy1qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:40Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=40d7aec5-9705-4885-8d58-7fcfdb8eac5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.881 255071 DEBUG nova.network.os_vif_util [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.882 255071 DEBUG nova.network.os_vif_util [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.883 255071 DEBUG nova.objects.instance [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40d7aec5-9705-4885-8d58-7fcfdb8eac5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.895 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <uuid>40d7aec5-9705-4885-8d58-7fcfdb8eac5c</uuid>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <name>instance-00000017</name>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:name>tempest-TestVolumeBootPattern-server-665396298</nova:name>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:13:44</nova:creationTime>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:user uuid="5e62d407203540599a65ac50d5d447b9">tempest-TestVolumeBootPattern-1666331213-project-member</nova:user>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:project uuid="3df24932e2a44aeab3c2aece8a045774">tempest-TestVolumeBootPattern-1666331213</nova:project>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <nova:port uuid="90cfdac4-0eb9-4a00-9ff4-a7fe2474579d">
Nov 29 08:13:44 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <system>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="serial">40d7aec5-9705-4885-8d58-7fcfdb8eac5c</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="uuid">40d7aec5-9705-4885-8d58-7fcfdb8eac5c</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </system>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <os>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </os>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <features>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </features>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config">
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </source>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-710d1fe4-9be5-417f-b2c6-0a997cd9f339">
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </source>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:13:44 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <serial>710d1fe4-9be5-417f-b2c6-0a997cd9f339</serial>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:a3:5c:f6"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <target dev="tap90cfdac4-0e"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/console.log" append="off"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <video>
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </video>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:13:44 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:13:44 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:13:44 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:13:44 compute-0 nova_compute[255040]: </domain>
Nov 29 08:13:44 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
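[editor's note] The XML block above is the complete guest definition Nova hands to libvirt for instance-00000017. Outside of Nova, roughly the same launch can be done with the plain libvirt Python bindings; this is a sketch under that assumption, not the path driver.py actually takes (Nova goes through its own Guest wrapper), and the XML file name is hypothetical.

    # Sketch: define and start a persistent domain from the XML printed by
    # _get_guest_xml above, using the libvirt Python bindings directly.
    import libvirt

    with open('instance-00000017.xml') as f:   # hypothetical file holding the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot it, equivalent to `virsh start`
        print(dom.name(), dom.isActive())
    finally:
        conn.close()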
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.896 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Preparing to wait for external event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.897 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.897 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.897 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
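[editor's note] The three lockutils lines show the per-instance events lock being taken and released around event registration, just before Nova starts waiting for network-vif-plugged. The same oslo.concurrency primitive used directly, with the lock name copied from the log and a placeholder body:

    # Sketch of the named in-process lock pattern the lines above record.
    from oslo_concurrency import lockutils

    with lockutils.lock('40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events'):
        # placeholder for _create_or_get_event(); the real body registers the
        # network-vif-plugged event that the spawn will block on
        pass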
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.898 255071 DEBUG nova.virt.libvirt.vif [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-665396298',display_name='tempest-TestVolumeBootPattern-server-665396298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-665396298',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-mu2vy1qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:40Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=40d7aec5-9705-4885-8d58-7fcfdb8eac5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.898 255071 DEBUG nova.network.os_vif_util [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.899 255071 DEBUG nova.network.os_vif_util [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.899 255071 DEBUG os_vif [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:13:44 compute-0 nova_compute[255040]: 2025-11-29 08:13:44.900 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.070 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.070 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.071 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.081 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.081 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90cfdac4-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.082 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90cfdac4-0e, col_values=(('external_ids', {'iface-id': '90cfdac4-0eb9-4a00-9ff4-a7fe2474579d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:5c:f6', 'vm-uuid': '40d7aec5-9705-4885-8d58-7fcfdb8eac5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:45 compute-0 NetworkManager[49116]: <info>  [1764404025.0859] manager: (tap90cfdac4-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.085 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.089 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.091 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.094 255071 INFO os_vif [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e')
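[editor's note] The ovsdbapp transaction lines above add tap90cfdac4-0e to br-int and stamp its external_ids so ovn-controller can claim the port. A hedged equivalent using ovsdbapp's Open_vSwitch schema API; the ovsdb socket path is an assumption, while the bridge, port and external_ids values are the ones logged.

    # Sketch of the AddPortCommand / DbSetCommand transaction logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap90cfdac4-0e', may_exist=True))
        txn.add(api.db_set('Interface', 'tap90cfdac4-0e',
                           ('external_ids', {
                               'iface-id': '90cfdac4-0eb9-4a00-9ff4-a7fe2474579d',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:a3:5c:f6',
                               'vm-uuid': '40d7aec5-9705-4885-8d58-7fcfdb8eac5c'})))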
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.143 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.144 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.144 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] No VIF found with MAC fa:16:3e:a3:5c:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.145 255071 INFO nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Using config drive
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.163 255071 DEBUG nova.storage.rbd_utils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:45 compute-0 ovn_controller[153295]: 2025-11-29T08:13:45Z|00206|binding|INFO|Releasing lport 97dcad99-21fd-4887-92fc-118b6a771872 from this chassis (sb_readonly=0)
Nov 29 08:13:45 compute-0 ovn_controller[153295]: 2025-11-29T08:13:45Z|00207|binding|INFO|Releasing lport c7579d40-4225-44ab-93bd-e31c3efe399f from this chassis (sb_readonly=0)
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.364 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.504 255071 INFO nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Creating config drive at /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.510 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmnggotsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.632 255071 DEBUG nova.network.neutron [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updated VIF entry in instance network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.634 255071 DEBUG nova.network.neutron [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating instance_info_cache with network_info: [{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.639 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmnggotsi" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.675 255071 DEBUG nova.storage.rbd_utils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] rbd image 40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.680 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config 40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.712 255071 DEBUG oslo_concurrency.lockutils [req-da963596-2b3d-4fc0-b70a-0d7477fc6fbb req-6a9e7b0a-9573-42f9-9403-e19e9b553b4f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/203946753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 16 KiB/s wr, 174 op/s
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.921 255071 DEBUG oslo_concurrency.processutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config 40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:45 compute-0 nova_compute[255040]: 2025-11-29 08:13:45.922 255071 INFO nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Deleting local config drive /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c/disk.config because it was imported into RBD.
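[editor's note] The preceding lines cover the config-drive round trip: build an ISO9660 image with mkisofs, `rbd import` it into the vms pool as <uuid>_disk.config, then delete the local copy. Condensed into one hedged sketch; paths, pool and flags are taken from the log (the -publisher and -quiet flags are dropped here for brevity) and the temporary staging directory is the one the log happened to use.

    # Sketch of the config-drive sequence logged above: make the ISO, import it
    # into Ceph, then remove the local file.
    import os
    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c'
    iso = os.path.join(base, 'disk.config')

    processutils.execute('/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                         '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                         '/tmp/tmpmnggotsi')            # staging dir from the log
    processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                         '40d7aec5-9705-4885-8d58-7fcfdb8eac5c_disk.config',
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    os.remove(iso)   # "Deleting local config drive ... because it was imported into RBD"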
Nov 29 08:13:45 compute-0 kernel: tap90cfdac4-0e: entered promiscuous mode
Nov 29 08:13:45 compute-0 NetworkManager[49116]: <info>  [1764404025.9758] manager: (tap90cfdac4-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 08:13:46 compute-0 ovn_controller[153295]: 2025-11-29T08:13:46Z|00208|binding|INFO|Claiming lport 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d for this chassis.
Nov 29 08:13:46 compute-0 ovn_controller[153295]: 2025-11-29T08:13:46Z|00209|binding|INFO|90cfdac4-0eb9-4a00-9ff4-a7fe2474579d: Claiming fa:16:3e:a3:5c:f6 10.100.0.10
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.010 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.019 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:5c:f6 10.100.0.10'], port_security=['fa:16:3e:a3:5c:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '40d7aec5-9705-4885-8d58-7fcfdb8eac5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.020 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f bound to our chassis
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.022 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:13:46 compute-0 ovn_controller[153295]: 2025-11-29T08:13:46Z|00210|binding|INFO|Setting lport 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d ovn-installed in OVS
Nov 29 08:13:46 compute-0 ovn_controller[153295]: 2025-11-29T08:13:46Z|00211|binding|INFO|Setting lport 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d up in Southbound
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.031 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.047 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6ab3d-a994-4651-a885-db220689a077]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 systemd-machined[216271]: New machine qemu-23-instance-00000017.
Nov 29 08:13:46 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Nov 29 08:13:46 compute-0 systemd-udevd[289966]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.085 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bf4d7a-947e-40f3-8de1-feac38d56226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 NetworkManager[49116]: <info>  [1764404026.0879] device (tap90cfdac4-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.090 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d9158259-a3c3-4beb-a335-8cead866494a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 NetworkManager[49116]: <info>  [1764404026.0938] device (tap90cfdac4-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.118 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ed96afca-4d3b-46a1-9c37-e552a3a8818e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 podman[289950]: 2025-11-29 08:13:46.122062872 +0000 UTC m=+0.078482145 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.143 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a094a5d1-4840-46d3-9b9a-7b82f5b2aa8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621285, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290007, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 sudo[289969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:46 compute-0 sudo[289969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:46 compute-0 sudo[289969]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.160 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0384de08-2983-47d1-a1bf-abeb65b4a6bc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621299, 'tstamp': 621299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290010, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621303, 'tstamp': 621303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290010, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.162 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.164 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.165 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.166 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.166 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.166 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:13:46.168 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:13:46 compute-0 sudo[290013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:46 compute-0 sudo[290013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:46 compute-0 sudo[290013]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:46 compute-0 sudo[290038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:46 compute-0 sudo[290038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:46 compute-0 sudo[290038]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:46 compute-0 sudo[290099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:13:46 compute-0 sudo[290099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.424 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404026.423877, 40d7aec5-9705-4885-8d58-7fcfdb8eac5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.425 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] VM Started (Lifecycle Event)
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.446 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.450 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404026.423959, 40d7aec5-9705-4885-8d58-7fcfdb8eac5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.450 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] VM Paused (Lifecycle Event)
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.466 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.469 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.495 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:13:46 compute-0 ceph-mon[75237]: pgmap v1746: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 16 KiB/s wr, 174 op/s
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.747 255071 DEBUG nova.compute.manager [req-b9ac9c86-ac5e-4d07-a143-8aecb5fa339d req-3c656207-9fbb-4d45-8b63-36c0863e5215 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.748 255071 DEBUG oslo_concurrency.lockutils [req-b9ac9c86-ac5e-4d07-a143-8aecb5fa339d req-3c656207-9fbb-4d45-8b63-36c0863e5215 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.748 255071 DEBUG oslo_concurrency.lockutils [req-b9ac9c86-ac5e-4d07-a143-8aecb5fa339d req-3c656207-9fbb-4d45-8b63-36c0863e5215 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.748 255071 DEBUG oslo_concurrency.lockutils [req-b9ac9c86-ac5e-4d07-a143-8aecb5fa339d req-3c656207-9fbb-4d45-8b63-36c0863e5215 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.748 255071 DEBUG nova.compute.manager [req-b9ac9c86-ac5e-4d07-a143-8aecb5fa339d req-3c656207-9fbb-4d45-8b63-36c0863e5215 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Processing event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.749 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.751 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404026.751655, 40d7aec5-9705-4885-8d58-7fcfdb8eac5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.751 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] VM Resumed (Lifecycle Event)
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.753 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.756 255071 INFO nova.virt.libvirt.driver [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance spawned successfully.
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.756 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.772 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.778 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.781 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.781 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.782 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.782 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.782 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.783 255071 DEBUG nova.virt.libvirt.driver [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.810 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.835 255071 INFO nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Took 4.49 seconds to spawn the instance on the hypervisor.
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.835 255071 DEBUG nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:13:46 compute-0 podman[290199]: 2025-11-29 08:13:46.842943959 +0000 UTC m=+0.068722309 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.892 255071 INFO nova.compute.manager [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Took 6.74 seconds to build instance.
Nov 29 08:13:46 compute-0 nova_compute[255040]: 2025-11-29 08:13:46.906 255071 DEBUG oslo_concurrency.lockutils [None req-10ba5f20-80f5-4866-a9b9-dc57a27d7cf0 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:46 compute-0 podman[290199]: 2025-11-29 08:13:46.947805545 +0000 UTC m=+0.173583865 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.084 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.084 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.100 255071 DEBUG nova.objects.instance [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.132 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.440 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.441 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.441 255071 INFO nova.compute.manager [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attaching volume d2fc1a9e-e23e-4730-b2b9-3aec38100e28 to /dev/vdb
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.556 255071 DEBUG os_brick.utils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.557 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.570 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.571 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[08552a4f-6782-48bc-b954-3778849295f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.578 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.586 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.587 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb63b84-8e40-46b0-9a0b-9be97b86c23b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.589 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.599 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.599 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[58901e91-77ac-4df6-9286-4f7ae8634e4b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:47 compute-0 sudo[290099]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.602 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[1db532d3-53e4-446e-b8a9-0370a799ccad]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.603 255071 DEBUG oslo_concurrency.processutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:13:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.628 255071 DEBUG oslo_concurrency.processutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.631 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.632 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.632 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.632 255071 DEBUG os_brick.utils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:13:47 compute-0 nova_compute[255040]: 2025-11-29 08:13:47.633 255071 DEBUG nova.virt.block_device [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating existing volume attachment record: 446ab1f6-5adc-4750-ad6a-0c45e401264d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:13:47 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:47 compute-0 sudo[290362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:47 compute-0 sudo[290362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:47 compute-0 sudo[290362]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:47 compute-0 sudo[290387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:47 compute-0 sudo[290387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:47 compute-0 sudo[290387]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:47 compute-0 sudo[290412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:47 compute-0 sudo[290412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:47 compute-0 sudo[290412]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:47 compute-0 sudo[290437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:13:47 compute-0 sudo[290437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 14 KiB/s wr, 157 op/s
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3099458634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:48 compute-0 sudo[290437]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:48 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev be79dabc-aa07-43e4-8536-63198a3b7e30 does not exist
Nov 29 08:13:48 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1bc05b89-72a4-4e7f-b8b9-d57a0346057a does not exist
Nov 29 08:13:48 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 035088c4-3dd3-4cd9-8d54-b98d35b6417d does not exist
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:48 compute-0 sudo[290494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:48 compute-0 sudo[290494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-0 sudo[290494]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-0 sudo[290519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:48 compute-0 sudo[290519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-0 sudo[290519]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-0 sudo[290544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:48 compute-0 sudo[290544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-0 sudo[290544]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:48 compute-0 sudo[290569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:13:48 compute-0 sudo[290569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:48 compute-0 ceph-mon[75237]: pgmap v1747: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 14 KiB/s wr, 157 op/s
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3099458634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 08:13:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.072348202 +0000 UTC m=+0.049000996 container create 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:13:49 compute-0 systemd[1]: Started libpod-conmon-89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd.scope.
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.051512638 +0000 UTC m=+0.028165462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.183149939 +0000 UTC m=+0.159802783 container init 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.190833797 +0000 UTC m=+0.167486591 container start 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.195377759 +0000 UTC m=+0.172030583 container attach 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:49 compute-0 keen_sutherland[290650]: 167 167
Nov 29 08:13:49 compute-0 systemd[1]: libpod-89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd.scope: Deactivated successfully.
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.199572763 +0000 UTC m=+0.176225557 container died 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-53afc41c3056872571f4ab382deb6d18caba5b380c65cc5add2086616fc9e665-merged.mount: Deactivated successfully.
Nov 29 08:13:49 compute-0 podman[290634]: 2025-11-29 08:13:49.24051237 +0000 UTC m=+0.217165164 container remove 89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_sutherland, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:49 compute-0 nova_compute[255040]: 2025-11-29 08:13:49.273 255071 DEBUG nova.objects.instance [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:49 compute-0 systemd[1]: libpod-conmon-89614ccf8d8feaa32c9bc0548cb85653ecb6053c21dc2d38a53c26cc488930dd.scope: Deactivated successfully.
Nov 29 08:13:49 compute-0 podman[290674]: 2025-11-29 08:13:49.474644613 +0000 UTC m=+0.043593640 container create 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:13:49 compute-0 systemd[1]: Started libpod-conmon-7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f.scope.
Nov 29 08:13:49 compute-0 podman[290674]: 2025-11-29 08:13:49.456317067 +0000 UTC m=+0.025266104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:49 compute-0 podman[290674]: 2025-11-29 08:13:49.621312198 +0000 UTC m=+0.190261255 container init 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:49 compute-0 podman[290674]: 2025-11-29 08:13:49.630176539 +0000 UTC m=+0.199125566 container start 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 08:13:49 compute-0 podman[290674]: 2025-11-29 08:13:49.638300989 +0000 UTC m=+0.207250026 container attach 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 08:13:49 compute-0 nova_compute[255040]: 2025-11-29 08:13:49.897 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.4 KiB/s wr, 100 op/s
Nov 29 08:13:49 compute-0 ceph-mon[75237]: osdmap e373: 3 total, 3 up, 3 in
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.084 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.204 255071 DEBUG nova.compute.manager [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.205 255071 DEBUG oslo_concurrency.lockutils [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.205 255071 DEBUG oslo_concurrency.lockutils [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.205 255071 DEBUG oslo_concurrency.lockutils [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.205 255071 DEBUG nova.compute.manager [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] No waiting events found dispatching network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.206 255071 WARNING nova.compute.manager [req-e9c0ebde-8eff-459c-aea4-203c735c1d86 req-7e72c01a-c0e1-48c2-8df8-96957e476391 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received unexpected event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d for instance with vm_state active and task_state None.
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.217 255071 DEBUG nova.virt.libvirt.driver [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to attach volume d2fc1a9e-e23e-4730-b2b9-3aec38100e28 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.220 255071 DEBUG nova.virt.libvirt.guest [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:13:50 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-d2fc1a9e-e23e-4730-b2b9-3aec38100e28">
Nov 29 08:13:50 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   </source>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:13:50 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:13:50 compute-0 nova_compute[255040]:   <serial>d2fc1a9e-e23e-4730-b2b9-3aec38100e28</serial>
Nov 29 08:13:50 compute-0 nova_compute[255040]: </disk>
Nov 29 08:13:50 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.369 255071 DEBUG nova.virt.libvirt.driver [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.369 255071 DEBUG nova.virt.libvirt.driver [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.369 255071 DEBUG nova.virt.libvirt.driver [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.370 255071 DEBUG nova.virt.libvirt.driver [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No VIF found with MAC fa:16:3e:8d:66:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:13:50 compute-0 nova_compute[255040]: 2025-11-29 08:13:50.675 255071 DEBUG oslo_concurrency.lockutils [None req-0c037ab9-1364-48e5-b277-0213dfa8b2f5 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:50 compute-0 awesome_driscoll[290691]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:13:50 compute-0 awesome_driscoll[290691]: --> relative data size: 1.0
Nov 29 08:13:50 compute-0 awesome_driscoll[290691]: --> All data devices are unavailable
Nov 29 08:13:50 compute-0 systemd[1]: libpod-7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f.scope: Deactivated successfully.
Nov 29 08:13:50 compute-0 systemd[1]: libpod-7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f.scope: Consumed 1.062s CPU time.
Nov 29 08:13:50 compute-0 podman[290674]: 2025-11-29 08:13:50.750477687 +0000 UTC m=+1.319426744 container died 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e93c09d3c48814a478f8a8c1ec01b795f177b6255632adfa530ae8ec0fd2b52-merged.mount: Deactivated successfully.
Nov 29 08:13:50 compute-0 podman[290674]: 2025-11-29 08:13:50.877050689 +0000 UTC m=+1.445999716 container remove 7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:50 compute-0 systemd[1]: libpod-conmon-7a0f71dc61a2c09c4953e4178d47bd83540c68e0bf3ac1937e42077ba9cb934f.scope: Deactivated successfully.
Nov 29 08:13:50 compute-0 sudo[290569]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:50 compute-0 sudo[290754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:50 compute-0 sudo[290754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:50 compute-0 sudo[290754]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:51 compute-0 sudo[290779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:51 compute-0 sudo[290779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:51 compute-0 sudo[290779]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:51 compute-0 ceph-mon[75237]: pgmap v1749: 305 pgs: 305 active+clean; 248 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.4 KiB/s wr, 100 op/s
Nov 29 08:13:51 compute-0 sudo[290804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:51 compute-0 sudo[290804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:51 compute-0 sudo[290804]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:51 compute-0 sudo[290829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:13:51 compute-0 sudo[290829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.517272245 +0000 UTC m=+0.054548187 container create e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.48457818 +0000 UTC m=+0.021854132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:51 compute-0 systemd[1]: Started libpod-conmon-e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9.scope.
Nov 29 08:13:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.786694371 +0000 UTC m=+0.323970323 container init e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.797023009 +0000 UTC m=+0.334298931 container start e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.8003498 +0000 UTC m=+0.337625722 container attach e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:13:51 compute-0 upbeat_franklin[290910]: 167 167
Nov 29 08:13:51 compute-0 systemd[1]: libpod-e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9.scope: Deactivated successfully.
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.8033183 +0000 UTC m=+0.340594242 container died e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:13:51 compute-0 nova_compute[255040]: 2025-11-29 08:13:51.807 255071 DEBUG nova.compute.manager [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:13:51 compute-0 nova_compute[255040]: 2025-11-29 08:13:51.809 255071 DEBUG nova.compute.manager [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing instance network info cache due to event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:13:51 compute-0 nova_compute[255040]: 2025-11-29 08:13:51.810 255071 DEBUG oslo_concurrency.lockutils [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:13:51 compute-0 nova_compute[255040]: 2025-11-29 08:13:51.810 255071 DEBUG oslo_concurrency.lockutils [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:13:51 compute-0 nova_compute[255040]: 2025-11-29 08:13:51.810 255071 DEBUG nova.network.neutron [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-770a7e8179218f814909726704a5211143c932e726891aac11a241f3dd0c8ebb-merged.mount: Deactivated successfully.
Nov 29 08:13:51 compute-0 podman[290894]: 2025-11-29 08:13:51.918018512 +0000 UTC m=+0.455294434 container remove e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:13:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 248 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 22 KiB/s wr, 143 op/s
Nov 29 08:13:51 compute-0 systemd[1]: libpod-conmon-e8e02e380d870e8582b060d1071544061bbfbc3eeb2f381e8472313fe9e667d9.scope: Deactivated successfully.
Nov 29 08:13:52 compute-0 podman[290934]: 2025-11-29 08:13:52.105368549 +0000 UTC m=+0.047985019 container create 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:13:52 compute-0 systemd[1]: Started libpod-conmon-6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1.scope.
Nov 29 08:13:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51957ceae6efff23e4f6c1e8e50eb522c207345ba13aa7133f71de679c2f48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51957ceae6efff23e4f6c1e8e50eb522c207345ba13aa7133f71de679c2f48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51957ceae6efff23e4f6c1e8e50eb522c207345ba13aa7133f71de679c2f48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc51957ceae6efff23e4f6c1e8e50eb522c207345ba13aa7133f71de679c2f48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:52 compute-0 podman[290934]: 2025-11-29 08:13:52.08726696 +0000 UTC m=+0.029883450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:52 compute-0 podman[290934]: 2025-11-29 08:13:52.247355529 +0000 UTC m=+0.189972009 container init 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:13:52 compute-0 podman[290934]: 2025-11-29 08:13:52.255043616 +0000 UTC m=+0.197660086 container start 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:52 compute-0 podman[290934]: 2025-11-29 08:13:52.447915643 +0000 UTC m=+0.390532113 container attach 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 08:13:52 compute-0 nova_compute[255040]: 2025-11-29 08:13:52.696 255071 DEBUG nova.network.neutron [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updated VIF entry in instance network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:13:52 compute-0 nova_compute[255040]: 2025-11-29 08:13:52.698 255071 DEBUG nova.network.neutron [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating instance_info_cache with network_info: [{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
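The network_info payload that Nova writes back into the instance cache in the line above is a list of VIF dictionaries; the single VIF shown carries one fixed address on 10.100.0.0/28 plus a floating IP. A sketch of pulling those addresses out of that structure, assuming exactly the fields visible in the log line (network_info is a hypothetical variable holding the already-parsed list):

    def summarize_vifs(network_info):
        # network_info: the parsed JSON list from the cache-update log line above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    line = f"port {vif['id']} mac {vif['address']}: fixed {ip['address']}"
                    if floats:
                        line += f", floating {', '.join(floats)}"
                    print(line)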
Nov 29 08:13:52 compute-0 nova_compute[255040]: 2025-11-29 08:13:52.717 255071 DEBUG oslo_concurrency.lockutils [req-dcdabc86-72d7-49d9-87d9-88f9290d06d0 req-87f33116-48dc-404d-9cc8-075145ac1c23 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]: {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     "0": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "devices": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "/dev/loop3"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             ],
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_name": "ceph_lv0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_size": "21470642176",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "name": "ceph_lv0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "tags": {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.crush_device_class": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.encrypted": "0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_id": "0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.vdo": "0"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             },
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "vg_name": "ceph_vg0"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         }
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     ],
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     "1": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "devices": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "/dev/loop4"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             ],
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_name": "ceph_lv1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_size": "21470642176",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "name": "ceph_lv1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "tags": {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.crush_device_class": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.encrypted": "0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_id": "1",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.vdo": "0"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             },
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "vg_name": "ceph_vg1"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         }
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     ],
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     "2": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "devices": [
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "/dev/loop5"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             ],
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_name": "ceph_lv2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_size": "21470642176",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "name": "ceph_lv2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "tags": {
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.crush_device_class": "",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.encrypted": "0",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osd_id": "2",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:                 "ceph.vdo": "0"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             },
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "type": "block",
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:             "vg_name": "ceph_vg2"
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:         }
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]:     ]
Nov 29 08:13:53 compute-0 nostalgic_mendel[290951]: }
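The JSON emitted by the nostalgic_mendel container above is the stdout of the "ceph-volume lvm list --format json" call that cephadm launched through sudo (PID 290829): a map of OSD id to the logical volumes backing it, with the ceph.* LV tags spelled out per entry. A minimal sketch of consuming that structure, assuming only the fields visible above (the file name osd_lvm_list.json is a placeholder, since in the log the JSON simply goes to the container's stdout):

    import json

    # Parse the "ceph-volume lvm list --format json" output captured above.
    with open("osd_lvm_list.json") as f:
        lvm_list = json.load(f)

    # The top level maps the OSD id (as a string) to a list of LV entries.
    for osd_id, entries in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for entry in entries:
            tags = entry["tags"]
            print(f"osd.{osd_id}: lv={entry['lv_path']} "
                  f"devices={','.join(entry['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")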
Nov 29 08:13:53 compute-0 systemd[1]: libpod-6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1.scope: Deactivated successfully.
Nov 29 08:13:53 compute-0 podman[290934]: 2025-11-29 08:13:53.040409506 +0000 UTC m=+0.983025986 container died 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Nov 29 08:13:53 compute-0 ceph-mon[75237]: pgmap v1750: 305 pgs: 305 active+clean; 248 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 22 KiB/s wr, 143 op/s
Nov 29 08:13:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 08:13:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Nov 29 08:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc51957ceae6efff23e4f6c1e8e50eb522c207345ba13aa7133f71de679c2f48-merged.mount: Deactivated successfully.
Nov 29 08:13:53 compute-0 podman[290934]: 2025-11-29 08:13:53.1418566 +0000 UTC m=+1.084473070 container remove 6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:13:53 compute-0 systemd[1]: libpod-conmon-6586b499536d15b9e5959cd86b9533789e099723d24bf521d7cf0d66388db6f1.scope: Deactivated successfully.
Nov 29 08:13:53 compute-0 sudo[290829]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:53 compute-0 sudo[290971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:53 compute-0 sudo[290971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:53 compute-0 sudo[290971]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:53 compute-0 sudo[290996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:53 compute-0 sudo[290996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:53 compute-0 sudo[290996]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:53 compute-0 sudo[291021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:53 compute-0 sudo[291021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:53 compute-0 sudo[291021]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:53 compute-0 sudo[291046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:13:53 compute-0 sudo[291046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.722334898 +0000 UTC m=+0.039018325 container create 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:13:53 compute-0 systemd[1]: Started libpod-conmon-0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05.scope.
Nov 29 08:13:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.705120263 +0000 UTC m=+0.021803710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.811927851 +0000 UTC m=+0.128611298 container init 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.818446298 +0000 UTC m=+0.135129725 container start 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.822083887 +0000 UTC m=+0.138767324 container attach 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:13:53 compute-0 affectionate_curie[291131]: 167 167
Nov 29 08:13:53 compute-0 systemd[1]: libpod-0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05.scope: Deactivated successfully.
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.824461551 +0000 UTC m=+0.141144978 container died 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d23060f4428d8d97647b43c96a2709b093e1f31316a210ad29fe90f758a9bcc2-merged.mount: Deactivated successfully.
Nov 29 08:13:53 compute-0 podman[291115]: 2025-11-29 08:13:53.860039893 +0000 UTC m=+0.176723320 container remove 0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:13:53 compute-0 systemd[1]: libpod-conmon-0994b51f2e361ae8df3a2558ab454c0698cdd4c777de3ca97508df5ba99b7e05.scope: Deactivated successfully.
Nov 29 08:13:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 123 op/s
Nov 29 08:13:54 compute-0 podman[291154]: 2025-11-29 08:13:54.063930917 +0000 UTC m=+0.044484003 container create b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:13:54 compute-0 podman[291154]: 2025-11-29 08:13:54.042539969 +0000 UTC m=+0.023093065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:54 compute-0 ceph-mon[75237]: osdmap e374: 3 total, 3 up, 3 in
Nov 29 08:13:54 compute-0 systemd[1]: Started libpod-conmon-b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b.scope.
Nov 29 08:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e26a4babe8d726fb042269e8d53826c268b798ae533cdb00aa067e86cc418a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e26a4babe8d726fb042269e8d53826c268b798ae533cdb00aa067e86cc418a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e26a4babe8d726fb042269e8d53826c268b798ae533cdb00aa067e86cc418a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e26a4babe8d726fb042269e8d53826c268b798ae533cdb00aa067e86cc418a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:54 compute-0 podman[291154]: 2025-11-29 08:13:54.274296397 +0000 UTC m=+0.254849493 container init b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 08:13:54 compute-0 podman[291154]: 2025-11-29 08:13:54.283272149 +0000 UTC m=+0.263825245 container start b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:54 compute-0 podman[291154]: 2025-11-29 08:13:54.287777571 +0000 UTC m=+0.268330657 container attach b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:13:54 compute-0 nova_compute[255040]: 2025-11-29 08:13:54.939 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:55 compute-0 nova_compute[255040]: 2025-11-29 08:13:55.087 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Nov 29 08:13:55 compute-0 ceph-mon[75237]: pgmap v1752: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 123 op/s
Nov 29 08:13:55 compute-0 goofy_moore[291170]: {
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_id": 2,
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "type": "bluestore"
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     },
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_id": 0,
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "type": "bluestore"
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     },
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_id": 1,
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:13:55 compute-0 goofy_moore[291170]:         "type": "bluestore"
Nov 29 08:13:55 compute-0 goofy_moore[291170]:     }
Nov 29 08:13:55 compute-0 goofy_moore[291170]: }
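The goofy_moore output above is the matching "ceph-volume raw list --format json" run (sudo PID 291046), keyed by osd_uuid rather than OSD id. Those uuids line up with the ceph.osd_fsid tags from the LVM listing a few seconds earlier, which a short consistency check can confirm; a sketch under the assumption that the two JSON documents above were saved to the placeholder files osd_lvm_list.json and osd_raw_list.json:

    import json

    with open("osd_lvm_list.json") as f:
        lvm_list = json.load(f)
    with open("osd_raw_list.json") as f:
        raw_list = json.load(f)

    # osd_fsid -> OSD id, as reported by the LVM tags.
    fsid_to_id = {entry["tags"]["ceph.osd_fsid"]: osd_id
                  for osd_id, entries in lvm_list.items()
                  for entry in entries}

    # Every raw-list entry should map back to the same OSD id and be BlueStore.
    for osd_uuid, info in raw_list.items():
        assert str(info["osd_id"]) == fsid_to_id[osd_uuid], osd_uuid
        assert info["type"] == "bluestore"
        print(f"osd.{info['osd_id']} ({osd_uuid}) -> {info['device']}")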
Nov 29 08:13:55 compute-0 systemd[1]: libpod-b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b.scope: Deactivated successfully.
Nov 29 08:13:55 compute-0 systemd[1]: libpod-b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b.scope: Consumed 1.023s CPU time.
Nov 29 08:13:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 08:13:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Nov 29 08:13:55 compute-0 podman[291203]: 2025-11-29 08:13:55.356135124 +0000 UTC m=+0.036117457 container died b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9e26a4babe8d726fb042269e8d53826c268b798ae533cdb00aa067e86cc418a-merged.mount: Deactivated successfully.
Nov 29 08:13:55 compute-0 podman[291203]: 2025-11-29 08:13:55.531208799 +0000 UTC m=+0.211191112 container remove b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 08:13:55 compute-0 systemd[1]: libpod-conmon-b30db6da6a02a7b9f85cb51662af813c9b199a293bbf2bc72871b649cadba81b.scope: Deactivated successfully.
Nov 29 08:13:55 compute-0 sudo[291046]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:13:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:13:55 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:55 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5c155bcd-14a0-44cb-b7de-8fc70b08f4c1 does not exist
Nov 29 08:13:55 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f000c208-fd52-4eef-8083-9770db962f66 does not exist
Nov 29 08:13:55 compute-0 sudo[291218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:55 compute-0 sudo[291218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:55 compute-0 sudo[291218]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:55 compute-0 sudo[291243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:13:55 compute-0 sudo[291243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:55 compute-0 sudo[291243]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 152 KiB/s wr, 174 op/s
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007655685314514553 of space, bias 1.0, pg target 0.2296705594354366 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011193236390571076 of space, bias 1.0, pg target 0.3357970917171323 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
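The pg_autoscaler lines above contain enough to reconstruct the raw target before quantization: for every pool, the logged "pg target" equals the "using ... of space" ratio times the bias times 300, a factor that on this cluster plausibly corresponds to 3 OSDs at roughly 100 PGs per OSD (that reading is an assumption; only the multiplication itself is taken from the log). The "quantized to" figure then appears to reflect power-of-two rounding and the autoscaler's minimums, and it matches the current pg_num everywhere here, so no resize is proposed. A quick check of the multiplication against the logged values (the two pools reported at 0.0 usage are omitted as trivial):

    # pg_target ~= capacity_ratio * bias * 300 (300 assumed: 3 OSDs * ~100 PGs/OSD).
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("vms",                0.0007655685314514553,  1.0, 0.2296705594354366),
        ("volumes",            0.0011193236390571076,  1.0, 0.3357970917171323),
        ("backups",            6.359070782053786e-08,  1.0, 1.907721234616136e-05),
        ("images",             0.000665858301588852,   1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    ]
    for name, ratio, bias, logged_target in pools:
        computed = ratio * bias * 300
        assert abs(computed - logged_target) < 1e-12, name
        print(f"{name}: computed {computed:.6g}, logged {logged_target:.6g}")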
Nov 29 08:13:56 compute-0 ceph-mon[75237]: osdmap e375: 3 total, 3 up, 3 in
Nov 29 08:13:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:13:56 compute-0 ceph-mon[75237]: pgmap v1754: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 152 KiB/s wr, 174 op/s
Nov 29 08:13:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Nov 29 08:13:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 08:13:57 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Nov 29 08:13:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 152 KiB/s wr, 78 op/s
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.488 255071 DEBUG oslo_concurrency.lockutils [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.489 255071 DEBUG oslo_concurrency.lockutils [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.510 255071 INFO nova.compute.manager [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Detaching volume d2fc1a9e-e23e-4730-b2b9-3aec38100e28
Nov 29 08:13:58 compute-0 ceph-mon[75237]: osdmap e376: 3 total, 3 up, 3 in
Nov 29 08:13:58 compute-0 ceph-mon[75237]: pgmap v1756: 305 pgs: 305 active+clean; 250 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 152 KiB/s wr, 78 op/s
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.632 255071 INFO nova.virt.block_device [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to driver detach volume d2fc1a9e-e23e-4730-b2b9-3aec38100e28 from mountpoint /dev/vdb
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.642 255071 DEBUG nova.virt.libvirt.driver [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Attempting to detach device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.642 255071 DEBUG nova.virt.libvirt.guest [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-d2fc1a9e-e23e-4730-b2b9-3aec38100e28">
Nov 29 08:13:58 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   </source>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <serial>d2fc1a9e-e23e-4730-b2b9-3aec38100e28</serial>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]: </disk>
Nov 29 08:13:58 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.650 255071 INFO nova.virt.libvirt.driver [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config.
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.651 255071 DEBUG nova.virt.libvirt.driver [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.651 255071 DEBUG nova.virt.libvirt.guest [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-d2fc1a9e-e23e-4730-b2b9-3aec38100e28">
Nov 29 08:13:58 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   </source>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <serial>d2fc1a9e-e23e-4730-b2b9-3aec38100e28</serial>
Nov 29 08:13:58 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:13:58 compute-0 nova_compute[255040]: </disk>
Nov 29 08:13:58 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.779 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764404038.779616, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:13:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.783 255071 DEBUG nova.virt.libvirt.driver [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.786 255071 INFO nova.virt.libvirt.driver [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config.
Nov 29 08:13:58 compute-0 nova_compute[255040]: 2025-11-29 08:13:58.979 255071 DEBUG nova.objects.instance [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:13:59 compute-0 nova_compute[255040]: 2025-11-29 08:13:59.016 255071 DEBUG oslo_concurrency.lockutils [None req-3003c282-8f09-4930-a618-1c43a2a84254 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 250 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 166 KiB/s wr, 75 op/s
Nov 29 08:13:59 compute-0 nova_compute[255040]: 2025-11-29 08:13:59.941 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:00 compute-0 nova_compute[255040]: 2025-11-29 08:14:00.090 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:00 compute-0 podman[291270]: 2025-11-29 08:14:00.932186836 +0000 UTC m=+0.096853430 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:14:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Nov 29 08:14:00 compute-0 ceph-mon[75237]: pgmap v1757: 305 pgs: 305 active+clean; 250 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 166 KiB/s wr, 75 op/s
Nov 29 08:14:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 08:14:00 compute-0 ovn_controller[153295]: 2025-11-29T08:14:00Z|00048|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Nov 29 08:14:00 compute-0 ovn_controller[153295]: 2025-11-29T08:14:00Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a3:5c:f6 10.100.0.10
Nov 29 08:14:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.626 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.626 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.641 255071 DEBUG nova.objects.instance [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.677 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.838 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.839 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.839 255071 INFO nova.compute.manager [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attaching volume 1099f0c4-fa54-4f62-8895-49768608609b to /dev/vdb
Nov 29 08:14:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370115769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370115769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 260 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 974 KiB/s wr, 241 op/s
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.961 255071 DEBUG os_brick.utils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.962 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.975 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.976 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[0b07651a-af96-4a15-99e3-343045a460f2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.978 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.990 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.990 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[a7214ba1-094c-402f-9196-f9b73727b144]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:01 compute-0 nova_compute[255040]: 2025-11-29 08:14:01.993 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:01 compute-0 ceph-mon[75237]: osdmap e377: 3 total, 3 up, 3 in
Nov 29 08:14:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/370115769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/370115769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.005 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.006 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[aa270c9d-e5ed-447c-a743-7251b87a6f3f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.008 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[67727685-e1d1-42f2-b4cf-d98b37a4ccf2]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.008 255071 DEBUG oslo_concurrency.processutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.037 255071 DEBUG oslo_concurrency.processutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.040 255071 DEBUG os_brick.initiator.connectors.lightos [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.040 255071 DEBUG os_brick.initiator.connectors.lightos [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.040 255071 DEBUG os_brick.initiator.connectors.lightos [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.041 255071 DEBUG os_brick.utils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.041 255071 DEBUG nova.virt.block_device [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating existing volume attachment record: 7219479d-159c-43fa-bc5e-caac55231568 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:14:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2171127845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.734 255071 DEBUG nova.objects.instance [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.884 255071 DEBUG nova.virt.libvirt.driver [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to attach volume 1099f0c4-fa54-4f62-8895-49768608609b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:14:02 compute-0 nova_compute[255040]: 2025-11-29 08:14:02.886 255071 DEBUG nova.virt.libvirt.guest [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:14:02 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-1099f0c4-fa54-4f62-8895-49768608609b">
Nov 29 08:14:02 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:14:02 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:02 compute-0 nova_compute[255040]:   <serial>1099f0c4-fa54-4f62-8895-49768608609b</serial>
Nov 29 08:14:02 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:02 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:14:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Nov 29 08:14:03 compute-0 ceph-mon[75237]: pgmap v1759: 305 pgs: 305 active+clean; 260 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 974 KiB/s wr, 241 op/s
Nov 29 08:14:03 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2171127845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 08:14:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Nov 29 08:14:03 compute-0 nova_compute[255040]: 2025-11-29 08:14:03.049 255071 DEBUG nova.virt.libvirt.driver [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:03 compute-0 nova_compute[255040]: 2025-11-29 08:14:03.050 255071 DEBUG nova.virt.libvirt.driver [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:03 compute-0 nova_compute[255040]: 2025-11-29 08:14:03.050 255071 DEBUG nova.virt.libvirt.driver [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:03 compute-0 nova_compute[255040]: 2025-11-29 08:14:03.050 255071 DEBUG nova.virt.libvirt.driver [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No VIF found with MAC fa:16:3e:8d:66:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:14:03 compute-0 nova_compute[255040]: 2025-11-29 08:14:03.227 255071 DEBUG oslo_concurrency.lockutils [None req-de4c3e5f-3d2f-4a68-8a8c-11674dc3d433 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 265 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 199 op/s
Nov 29 08:14:04 compute-0 ceph-mon[75237]: osdmap e378: 3 total, 3 up, 3 in
Nov 29 08:14:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894291023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2894291023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:04 compute-0 ovn_controller[153295]: 2025-11-29T08:14:04Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Nov 29 08:14:04 compute-0 ovn_controller[153295]: 2025-11-29T08:14:04Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a3:5c:f6 10.100.0.10
Nov 29 08:14:04 compute-0 nova_compute[255040]: 2025-11-29 08:14:04.945 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:05 compute-0 ceph-mon[75237]: pgmap v1761: 305 pgs: 305 active+clean; 265 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 199 op/s
Nov 29 08:14:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2894291023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:05 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2894291023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:05 compute-0 nova_compute[255040]: 2025-11-29 08:14:05.091 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 269 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 928 KiB/s wr, 222 op/s
Nov 29 08:14:05 compute-0 nova_compute[255040]: 2025-11-29 08:14:05.978 255071 DEBUG oslo_concurrency.lockutils [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:05 compute-0 nova_compute[255040]: 2025-11-29 08:14:05.978 255071 DEBUG oslo_concurrency.lockutils [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:05 compute-0 nova_compute[255040]: 2025-11-29 08:14:05.990 255071 INFO nova.compute.manager [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Detaching volume 1099f0c4-fa54-4f62-8895-49768608609b
Nov 29 08:14:05 compute-0 ovn_controller[153295]: 2025-11-29T08:14:05Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:5c:f6 10.100.0.10
Nov 29 08:14:05 compute-0 ovn_controller[153295]: 2025-11-29T08:14:05Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:5c:f6 10.100.0.10
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.137 255071 INFO nova.virt.block_device [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to driver detach volume 1099f0c4-fa54-4f62-8895-49768608609b from mountpoint /dev/vdb
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.144 255071 DEBUG nova.virt.libvirt.driver [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Attempting to detach device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.144 255071 DEBUG nova.virt.libvirt.guest [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-1099f0c4-fa54-4f62-8895-49768608609b">
Nov 29 08:14:06 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <serial>1099f0c4-fa54-4f62-8895-49768608609b</serial>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:06 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.150 255071 INFO nova.virt.libvirt.driver [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config.
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.150 255071 DEBUG nova.virt.libvirt.driver [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.151 255071 DEBUG nova.virt.libvirt.guest [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-1099f0c4-fa54-4f62-8895-49768608609b">
Nov 29 08:14:06 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <serial>1099f0c4-fa54-4f62-8895-49768608609b</serial>
Nov 29 08:14:06 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:06 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:06 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.215 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764404046.215283, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.218 255071 DEBUG nova.virt.libvirt.driver [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.219 255071 INFO nova.virt.libvirt.driver [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config.
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.358 255071 DEBUG nova.objects.instance [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:06 compute-0 nova_compute[255040]: 2025-11-29 08:14:06.417 255071 DEBUG oslo_concurrency.lockutils [None req-c18c43cc-9c63-4095-b439-5d5780a59f08 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:07 compute-0 ceph-mon[75237]: pgmap v1762: 305 pgs: 305 active+clean; 269 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 928 KiB/s wr, 222 op/s
Nov 29 08:14:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 986 KiB/s wr, 229 op/s
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Nov 29 08:14:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 08:14:08 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Nov 29 08:14:08 compute-0 podman[291326]: 2025-11-29 08:14:08.888251684 +0000 UTC m=+0.053492718 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:14:09 compute-0 ceph-mon[75237]: pgmap v1763: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 986 KiB/s wr, 229 op/s
Nov 29 08:14:09 compute-0 ceph-mon[75237]: osdmap e379: 3 total, 3 up, 3 in
Nov 29 08:14:09 compute-0 nova_compute[255040]: 2025-11-29 08:14:09.761 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:09 compute-0 nova_compute[255040]: 2025-11-29 08:14:09.761 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:09 compute-0 nova_compute[255040]: 2025-11-29 08:14:09.894 255071 DEBUG nova.objects.instance [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 302 KiB/s wr, 90 op/s
Nov 29 08:14:09 compute-0 nova_compute[255040]: 2025-11-29 08:14:09.946 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:10 compute-0 nova_compute[255040]: 2025-11-29 08:14:10.092 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:10 compute-0 nova_compute[255040]: 2025-11-29 08:14:10.435 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:11 compute-0 ceph-mon[75237]: pgmap v1765: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 302 KiB/s wr, 90 op/s
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.531 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.532 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.532 255071 INFO nova.compute.manager [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attaching volume 8cf27777-1295-4135-af40-e42e81a42b21 to /dev/vdb
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.665 255071 DEBUG os_brick.utils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.666 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.678 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.679 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[acc8d50f-ec0a-4458-be20-39deaddaa76d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.680 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.688 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.688 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9ce7ed-0e45-4050-a5c9-21d2e80b8dd1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.690 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.699 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.699 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[fe772ec9-8ce6-48ca-af11-56bfd3be0837]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.700 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[24d42487-8ab1-43b5-a3cc-c320b19e3311]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.701 255071 DEBUG oslo_concurrency.processutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.725 255071 DEBUG oslo_concurrency.processutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.728 255071 DEBUG os_brick.initiator.connectors.lightos [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.728 255071 DEBUG os_brick.initiator.connectors.lightos [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.728 255071 DEBUG os_brick.initiator.connectors.lightos [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.729 255071 DEBUG os_brick.utils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:14:11 compute-0 nova_compute[255040]: 2025-11-29 08:14:11.729 255071 DEBUG nova.virt.block_device [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating existing volume attachment record: 262cc89a-0ccd-4151-ac17-91b32db8f05c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:14:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 288 KiB/s wr, 106 op/s
Nov 29 08:14:12 compute-0 ceph-mon[75237]: pgmap v1766: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 288 KiB/s wr, 106 op/s
Nov 29 08:14:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4157690069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.596 255071 DEBUG nova.objects.instance [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.631 255071 DEBUG nova.virt.libvirt.driver [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to attach volume 8cf27777-1295-4135-af40-e42e81a42b21 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.634 255071 DEBUG nova.virt.libvirt.guest [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:14:12 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8cf27777-1295-4135-af40-e42e81a42b21">
Nov 29 08:14:12 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:14:12 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:12 compute-0 nova_compute[255040]:   <serial>8cf27777-1295-4135-af40-e42e81a42b21</serial>
Nov 29 08:14:12 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:12 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:14:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3562632244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.880 255071 DEBUG nova.virt.libvirt.driver [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.881 255071 DEBUG nova.virt.libvirt.driver [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.882 255071 DEBUG nova.virt.libvirt.driver [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:12 compute-0 nova_compute[255040]: 2025-11-29 08:14:12.883 255071 DEBUG nova.virt.libvirt.driver [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No VIF found with MAC fa:16:3e:8d:66:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:14:13 compute-0 nova_compute[255040]: 2025-11-29 08:14:13.228 255071 DEBUG oslo_concurrency.lockutils [None req-a4826fac-d627-4895-8441-041725193536 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Nov 29 08:14:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4157690069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3562632244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 08:14:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Nov 29 08:14:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 104 KiB/s wr, 39 op/s
Nov 29 08:14:14 compute-0 ceph-mon[75237]: osdmap e380: 3 total, 3 up, 3 in
Nov 29 08:14:14 compute-0 ceph-mon[75237]: pgmap v1768: 305 pgs: 305 active+clean; 270 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 104 KiB/s wr, 39 op/s
Nov 29 08:14:14 compute-0 nova_compute[255040]: 2025-11-29 08:14:14.949 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:15 compute-0 nova_compute[255040]: 2025-11-29 08:14:15.095 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Nov 29 08:14:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 08:14:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Nov 29 08:14:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 273 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 95 KiB/s wr, 58 op/s
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.214 255071 DEBUG oslo_concurrency.lockutils [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.215 255071 DEBUG oslo_concurrency.lockutils [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.244 255071 INFO nova.compute.manager [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Detaching volume 8cf27777-1295-4135-af40-e42e81a42b21
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.359 255071 INFO nova.virt.block_device [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to driver detach volume 8cf27777-1295-4135-af40-e42e81a42b21 from mountpoint /dev/vdb
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.367 255071 DEBUG nova.virt.libvirt.driver [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Attempting to detach device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.368 255071 DEBUG nova.virt.libvirt.guest [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8cf27777-1295-4135-af40-e42e81a42b21">
Nov 29 08:14:16 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <serial>8cf27777-1295-4135-af40-e42e81a42b21</serial>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:16 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.388 255071 INFO nova.virt.libvirt.driver [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config.
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.389 255071 DEBUG nova.virt.libvirt.driver [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.389 255071 DEBUG nova.virt.libvirt.guest [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-8cf27777-1295-4135-af40-e42e81a42b21">
Nov 29 08:14:16 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <serial>8cf27777-1295-4135-af40-e42e81a42b21</serial>
Nov 29 08:14:16 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:16 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:16 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.644 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764404056.6437905, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:14:16 compute-0 ceph-mon[75237]: osdmap e381: 3 total, 3 up, 3 in
Nov 29 08:14:16 compute-0 ceph-mon[75237]: pgmap v1770: 305 pgs: 305 active+clean; 273 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 290 KiB/s rd, 95 KiB/s wr, 58 op/s
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.645 255071 DEBUG nova.virt.libvirt.driver [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.648 255071 INFO nova.virt.libvirt.driver [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config.
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.913 255071 DEBUG nova.objects.instance [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:16 compute-0 podman[291376]: 2025-11-29 08:14:16.926822161 +0000 UTC m=+0.084879645 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:14:16 compute-0 nova_compute[255040]: 2025-11-29 08:14:16.988 255071 DEBUG oslo_concurrency.lockutils [None req-f3b5e9fb-8d81-441f-96bc-8ff0fabc2f5f 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2059635253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:17 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2059635253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 273 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 105 KiB/s wr, 68 op/s
Nov 29 08:14:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Nov 29 08:14:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 08:14:18 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Nov 29 08:14:18 compute-0 ceph-mon[75237]: pgmap v1771: 305 pgs: 305 active+clean; 273 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 105 KiB/s wr, 68 op/s
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.467 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.467 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.487 255071 DEBUG nova.objects.instance [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.540 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.808 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.808 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.808 255071 INFO nova.compute.manager [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attaching volume 364a708a-7f7e-4a3b-9a6c-169beaea7cda to /dev/vdb
Nov 29 08:14:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 199 KiB/s wr, 71 op/s
Nov 29 08:14:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.952 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.958 255071 DEBUG os_brick.utils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.959 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.974 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.975 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0dd150-865b-4df5-8ec7-9e5f96440f1f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.977 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.989 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.990 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf67310-2ee4-48ba-9ff7-7a5cc316f675]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:19 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.992 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:19.999 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.000 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[368b9cf0-b0cb-44a5-be88-f9f1dff31688]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.001 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3bcafe-4178-4031-af48-a8787e6dc7f7]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.002 255071 DEBUG oslo_concurrency.processutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:20 compute-0 ceph-mon[75237]: osdmap e382: 3 total, 3 up, 3 in
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.026 255071 DEBUG oslo_concurrency.processutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.028 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.029 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.029 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.029 255071 DEBUG os_brick.utils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.030 255071 DEBUG nova.virt.block_device [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating existing volume attachment record: 5721c523-3300-4d87-b527-920976cd409b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:14:20 compute-0 nova_compute[255040]: 2025-11-29 08:14:20.096 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:20 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2116850840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:21 compute-0 ceph-mon[75237]: pgmap v1773: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 199 KiB/s wr, 71 op/s
Nov 29 08:14:21 compute-0 ceph-mon[75237]: osdmap e383: 3 total, 3 up, 3 in
Nov 29 08:14:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2116850840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.285 255071 DEBUG nova.objects.instance [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.317 255071 DEBUG nova.virt.libvirt.driver [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to attach volume 364a708a-7f7e-4a3b-9a6c-169beaea7cda with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.319 255071 DEBUG nova.virt.libvirt.guest [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:14:21 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-364a708a-7f7e-4a3b-9a6c-169beaea7cda">
Nov 29 08:14:21 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:14:21 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:21 compute-0 nova_compute[255040]:   <serial>364a708a-7f7e-4a3b-9a6c-169beaea7cda</serial>
Nov 29 08:14:21 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:21 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:14:21 compute-0 ovn_controller[153295]: 2025-11-29T08:14:21Z|00212|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.605 255071 DEBUG nova.virt.libvirt.driver [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.605 255071 DEBUG nova.virt.libvirt.driver [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.605 255071 DEBUG nova.virt.libvirt.driver [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.606 255071 DEBUG nova.virt.libvirt.driver [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] No VIF found with MAC fa:16:3e:8d:66:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:14:21 compute-0 nova_compute[255040]: 2025-11-29 08:14:21.808 255071 DEBUG oslo_concurrency.lockutils [None req-f904d672-435b-4140-b01f-8a8dc8c3ee2a 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 208 KiB/s wr, 101 op/s
Nov 29 08:14:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Nov 29 08:14:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 08:14:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Nov 29 08:14:23 compute-0 ceph-mon[75237]: pgmap v1775: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 208 KiB/s wr, 101 op/s
Nov 29 08:14:23 compute-0 ceph-mon[75237]: osdmap e384: 3 total, 3 up, 3 in
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.663 255071 DEBUG oslo_concurrency.lockutils [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.663 255071 DEBUG oslo_concurrency.lockutils [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.725 255071 INFO nova.compute.manager [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Detaching volume 364a708a-7f7e-4a3b-9a6c-169beaea7cda
Nov 29 08:14:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.846 255071 INFO nova.virt.block_device [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Attempting to driver detach volume 364a708a-7f7e-4a3b-9a6c-169beaea7cda from mountpoint /dev/vdb
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.853 255071 DEBUG nova.virt.libvirt.driver [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Attempting to detach device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.853 255071 DEBUG nova.virt.libvirt.guest [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-364a708a-7f7e-4a3b-9a6c-169beaea7cda">
Nov 29 08:14:23 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <serial>364a708a-7f7e-4a3b-9a6c-169beaea7cda</serial>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:23 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.859 255071 INFO nova.virt.libvirt.driver [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the persistent domain config.
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.859 255071 DEBUG nova.virt.libvirt.driver [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.859 255071 DEBUG nova.virt.libvirt.guest [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-364a708a-7f7e-4a3b-9a6c-169beaea7cda">
Nov 29 08:14:23 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   </source>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <serial>364a708a-7f7e-4a3b-9a6c-169beaea7cda</serial>
Nov 29 08:14:23 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:14:23 compute-0 nova_compute[255040]: </disk>
Nov 29 08:14:23 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:14:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371143352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.915 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764404063.9148622, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.916 255071 DEBUG nova.virt.libvirt.driver [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:14:23 compute-0 nova_compute[255040]: 2025-11-29 08:14:23.919 255071 INFO nova.virt.libvirt.driver [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully detached device vdb from instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 from the live domain config.
Nov 29 08:14:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 105 KiB/s wr, 67 op/s
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.133 255071 DEBUG nova.objects.instance [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'flavor' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1371143352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.173 255071 DEBUG oslo_concurrency.lockutils [None req-38878251-dd7c-40f8-9dfc-31c0b5aebdfc 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.488 255071 DEBUG nova.compute.manager [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.489 255071 DEBUG nova.compute.manager [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing instance network info cache due to event network-changed-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.489 255071 DEBUG oslo_concurrency.lockutils [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.489 255071 DEBUG oslo_concurrency.lockutils [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.489 255071 DEBUG nova.network.neutron [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Refreshing network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.541 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.542 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.542 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.542 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.542 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.543 255071 INFO nova.compute.manager [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Terminating instance
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.544 255071 DEBUG nova.compute.manager [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:14:24 compute-0 kernel: tap90cfdac4-0e (unregistering): left promiscuous mode
Nov 29 08:14:24 compute-0 NetworkManager[49116]: <info>  [1764404064.5958] device (tap90cfdac4-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.607 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 ovn_controller[153295]: 2025-11-29T08:14:24Z|00213|binding|INFO|Releasing lport 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d from this chassis (sb_readonly=0)
Nov 29 08:14:24 compute-0 ovn_controller[153295]: 2025-11-29T08:14:24Z|00214|binding|INFO|Setting lport 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d down in Southbound
Nov 29 08:14:24 compute-0 ovn_controller[153295]: 2025-11-29T08:14:24Z|00215|binding|INFO|Removing iface tap90cfdac4-0e ovn-installed in OVS
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.609 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.614 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:5c:f6 10.100.0.10'], port_security=['fa:16:3e:a3:5c:f6 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '40d7aec5-9705-4885-8d58-7fcfdb8eac5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.615 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.617 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.636 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[159d7022-4284-49c0-b0e6-65802c4d8d8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.663 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8f03bc-4976-447e-9f3b-213835ac6288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 29 08:14:24 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.158s CPU time.
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.667 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[db971ad2-7b07-40f5-adb8-533a9fa615f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 systemd-machined[216271]: Machine qemu-23-instance-00000017 terminated.
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.694 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[35f29119-af7a-4216-8f53-60fc6ea30079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.714 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1066baf1-25b7-4fb5-a3c3-8d37cef152f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e23492e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621285, 'reachable_time': 33842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291437, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.729 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbe1696-0f36-4eb8-aee2-7b0ab268dc11]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621299, 'tstamp': 621299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291438, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6e23492e-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621303, 'tstamp': 621303}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291438, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.732 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.734 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.738 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.739 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e23492e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.739 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.739 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e23492e-b0, col_values=(('external_ids', {'iface-id': 'c7579d40-4225-44ab-93bd-e31c3efe399f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:24 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:24.740 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.763 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.768 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.781 255071 INFO nova.virt.libvirt.driver [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance destroyed successfully.
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.781 255071 DEBUG nova.objects.instance [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid 40d7aec5-9705-4885-8d58-7fcfdb8eac5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.796 255071 DEBUG nova.virt.libvirt.vif [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-665396298',display_name='tempest-TestVolumeBootPattern-server-665396298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-665396298',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-mu2vy1qp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:46Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=40d7aec5-9705-4885-8d58-7fcfdb8eac5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.797 255071 DEBUG nova.network.os_vif_util [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.798 255071 DEBUG nova.network.os_vif_util [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.798 255071 DEBUG os_vif [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.801 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.801 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90cfdac4-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.803 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.804 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.806 255071 INFO os_vif [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:5c:f6,bridge_name='br-int',has_traffic_filtering=True,id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90cfdac4-0e')
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.953 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.995 255071 INFO nova.virt.libvirt.driver [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Deleting instance files /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c_del
Nov 29 08:14:24 compute-0 nova_compute[255040]: 2025-11-29 08:14:24.996 255071 INFO nova.virt.libvirt.driver [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Deletion of /var/lib/nova/instances/40d7aec5-9705-4885-8d58-7fcfdb8eac5c_del complete
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.042 255071 INFO nova.compute.manager [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Took 0.50 seconds to destroy the instance on the hypervisor.
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.043 255071 DEBUG oslo.service.loopingcall [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.044 255071 DEBUG nova.compute.manager [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.044 255071 DEBUG nova.network.neutron [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.147 255071 DEBUG nova.compute.manager [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-unplugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.147 255071 DEBUG oslo_concurrency.lockutils [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.147 255071 DEBUG oslo_concurrency.lockutils [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.148 255071 DEBUG oslo_concurrency.lockutils [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.148 255071 DEBUG nova.compute.manager [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] No waiting events found dispatching network-vif-unplugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.148 255071 DEBUG nova.compute.manager [req-e5a24735-f6bb-430a-b309-fa8b03496a3d req-8294946a-a09d-4e32-9ee6-7e91bb19559d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-unplugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:14:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Nov 29 08:14:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 08:14:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Nov 29 08:14:25 compute-0 ceph-mon[75237]: pgmap v1777: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 105 KiB/s wr, 67 op/s
Nov 29 08:14:25 compute-0 ceph-mon[75237]: osdmap e385: 3 total, 3 up, 3 in
Nov 29 08:14:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1872683235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1872683235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.582 255071 DEBUG nova.network.neutron [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.596 255071 INFO nova.compute.manager [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Took 0.55 seconds to deallocate network for instance.
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.701 255071 DEBUG nova.network.neutron [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updated VIF entry in instance network info cache for port 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.702 255071 DEBUG nova.network.neutron [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Updating instance_info_cache with network_info: [{"id": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "address": "fa:16:3e:a3:5c:f6", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90cfdac4-0e", "ovs_interfaceid": "90cfdac4-0eb9-4a00-9ff4-a7fe2474579d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.746 255071 DEBUG oslo_concurrency.lockutils [req-290e56f7-e155-42ec-be82-0028969f61ab req-433bf760-92e6-4052-a069-61dc9119fba1 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-40d7aec5-9705-4885-8d58-7fcfdb8eac5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.753 255071 INFO nova.compute.manager [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Took 0.16 seconds to detach 1 volumes for instance.
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.790 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.791 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:25 compute-0 nova_compute[255040]: 2025-11-29 08:14:25.866 255071 DEBUG oslo_concurrency.processutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 28 KiB/s wr, 123 op/s
Nov 29 08:14:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Nov 29 08:14:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1872683235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1872683235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 08:14:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Nov 29 08:14:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922858135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.354 255071 DEBUG oslo_concurrency.processutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.361 255071 DEBUG nova.compute.provider_tree [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.378 255071 DEBUG nova.scheduler.client.report [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.398 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.422 255071 INFO nova.scheduler.client.report [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance 40d7aec5-9705-4885-8d58-7fcfdb8eac5c
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.505 255071 DEBUG oslo_concurrency.lockutils [None req-3f46cf1c-e882-40e9-aaf0-d844546d717a 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.558 255071 DEBUG nova.compute.manager [req-700f1561-1ae1-4172-b867-09a188cb122e req-bfcd4a08-7ebe-4133-a15d-50f57f648e43 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-deleted-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.558 255071 INFO nova.compute.manager [req-700f1561-1ae1-4172-b867-09a188cb122e req-bfcd4a08-7ebe-4133-a15d-50f57f648e43 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Neutron deleted interface 90cfdac4-0eb9-4a00-9ff4-a7fe2474579d; detaching it from the instance and deleting it from the info cache
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.559 255071 DEBUG nova.network.neutron [req-700f1561-1ae1-4172-b867-09a188cb122e req-bfcd4a08-7ebe-4133-a15d-50f57f648e43 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 29 08:14:26 compute-0 nova_compute[255040]: 2025-11-29 08:14:26.561 255071 DEBUG nova.compute.manager [req-700f1561-1ae1-4172-b867-09a188cb122e req-bfcd4a08-7ebe-4133-a15d-50f57f648e43 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Detach interface failed, port_id=90cfdac4-0eb9-4a00-9ff4-a7fe2474579d, reason: Instance 40d7aec5-9705-4885-8d58-7fcfdb8eac5c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:14:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340145324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3340145324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:27.138 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:27.138 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:27.139 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:27 compute-0 ceph-mon[75237]: pgmap v1779: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 28 KiB/s wr, 123 op/s
Nov 29 08:14:27 compute-0 ceph-mon[75237]: osdmap e386: 3 total, 3 up, 3 in
Nov 29 08:14:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2922858135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3340145324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3340145324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.217 255071 DEBUG nova.compute.manager [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.217 255071 DEBUG oslo_concurrency.lockutils [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.217 255071 DEBUG oslo_concurrency.lockutils [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.217 255071 DEBUG oslo_concurrency.lockutils [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "40d7aec5-9705-4885-8d58-7fcfdb8eac5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.218 255071 DEBUG nova.compute.manager [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] No waiting events found dispatching network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:27 compute-0 nova_compute[255040]: 2025-11-29 08:14:27.218 255071 WARNING nova.compute.manager [req-38c61a47-0dc3-4d10-883f-dc04d6085f01 req-8ac7c52e-45fa-4bb6-bbdb-7fa07b87bcb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Received unexpected event network-vif-plugged-90cfdac4-0eb9-4a00-9ff4-a7fe2474579d for instance with vm_state deleted and task_state None.
Nov 29 08:14:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 623 KiB/s rd, 119 KiB/s wr, 128 op/s
Nov 29 08:14:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Nov 29 08:14:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 08:14:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Nov 29 08:14:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Nov 29 08:14:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 08:14:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Nov 29 08:14:29 compute-0 ceph-mon[75237]: pgmap v1781: 305 pgs: 305 active+clean; 274 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 623 KiB/s rd, 119 KiB/s wr, 128 op/s
Nov 29 08:14:29 compute-0 ceph-mon[75237]: osdmap e387: 3 total, 3 up, 3 in
Nov 29 08:14:29 compute-0 ceph-mon[75237]: osdmap e388: 3 total, 3 up, 3 in
Nov 29 08:14:29 compute-0 nova_compute[255040]: 2025-11-29 08:14:29.834 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321554491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321554491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 269 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 126 KiB/s wr, 189 op/s
Nov 29 08:14:29 compute-0 nova_compute[255040]: 2025-11-29 08:14:29.955 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514677858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514677858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301593372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301593372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3321554491' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3321554491' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1514677858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1514677858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/301593372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/301593372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:31 compute-0 ceph-mon[75237]: pgmap v1784: 305 pgs: 305 active+clean; 269 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 126 KiB/s wr, 189 op/s
Nov 29 08:14:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Nov 29 08:14:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 08:14:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Nov 29 08:14:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 34 KiB/s wr, 254 op/s
Nov 29 08:14:31 compute-0 podman[291491]: 2025-11-29 08:14:31.978903096 +0000 UTC m=+0.140562853 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.328 255071 DEBUG nova.compute.manager [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.329 255071 DEBUG nova.compute.manager [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing instance network info cache due to event network-changed-9fb97b8d-7982-4dac-8c85-e972bacc8ad7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.329 255071 DEBUG oslo_concurrency.lockutils [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.330 255071 DEBUG oslo_concurrency.lockutils [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.330 255071 DEBUG nova.network.neutron [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Refreshing network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:14:32 compute-0 ceph-mon[75237]: osdmap e389: 3 total, 3 up, 3 in
Nov 29 08:14:32 compute-0 ceph-mon[75237]: pgmap v1786: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 34 KiB/s wr, 254 op/s
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.560 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.561 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.561 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.561 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.561 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.563 255071 INFO nova.compute.manager [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Terminating instance
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.564 255071 DEBUG nova.compute.manager [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:14:32 compute-0 kernel: tap9fb97b8d-79 (unregistering): left promiscuous mode
Nov 29 08:14:32 compute-0 NetworkManager[49116]: <info>  [1764404072.6292] device (tap9fb97b8d-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:14:32 compute-0 ovn_controller[153295]: 2025-11-29T08:14:32Z|00216|binding|INFO|Releasing lport 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 from this chassis (sb_readonly=0)
Nov 29 08:14:32 compute-0 ovn_controller[153295]: 2025-11-29T08:14:32Z|00217|binding|INFO|Setting lport 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 down in Southbound
Nov 29 08:14:32 compute-0 ovn_controller[153295]: 2025-11-29T08:14:32Z|00218|binding|INFO|Removing iface tap9fb97b8d-79 ovn-installed in OVS
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.639 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.647 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:12:20 10.100.0.9'], port_security=['fa:16:3e:9f:12:20 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cde9039b-1882-4723-9524-c51a289f67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3df24932e2a44aeab3c2aece8a045774', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd76aebb-076a-4516-b4a3-04b7aa482016', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6d2be5e-00f1-4a95-b572-cb93402763d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=9fb97b8d-7982-4dac-8c85-e972bacc8ad7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.649 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7 in datapath 6e23492e-beff-43f6-b4d1-f88ebeea0b6f unbound from our chassis
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.650 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e23492e-beff-43f6-b4d1-f88ebeea0b6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.652 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2742ac1a-844d-4a0b-a911-855c7ec048a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.652 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f namespace which is not needed anymore
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.662 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:32 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 29 08:14:32 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 18.037s CPU time.
Nov 29 08:14:32 compute-0 systemd-machined[216271]: Machine qemu-21-instance-00000015 terminated.
Nov 29 08:14:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [NOTICE]   (289070) : haproxy version is 2.8.14-c23fe91
Nov 29 08:14:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [NOTICE]   (289070) : path to executable is /usr/sbin/haproxy
Nov 29 08:14:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [WARNING]  (289070) : Exiting Master process...
Nov 29 08:14:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [ALERT]    (289070) : Current worker (289072) exited with code 143 (Terminated)
Nov 29 08:14:32 compute-0 neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f[289066]: [WARNING]  (289070) : All workers exited. Exiting... (0)
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.804 255071 INFO nova.virt.libvirt.driver [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Instance destroyed successfully.
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.805 255071 DEBUG nova.objects.instance [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lazy-loading 'resources' on Instance uuid cde9039b-1882-4723-9524-c51a289f67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:32 compute-0 systemd[1]: libpod-d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f.scope: Deactivated successfully.
Nov 29 08:14:32 compute-0 podman[291540]: 2025-11-29 08:14:32.813725443 +0000 UTC m=+0.061022672 container died d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f-userdata-shm.mount: Deactivated successfully.
Nov 29 08:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-837dc8d65587c503a995c087963bf7886a1f5f626d46713a14029772f6f7a075-merged.mount: Deactivated successfully.
Nov 29 08:14:32 compute-0 podman[291540]: 2025-11-29 08:14:32.85356769 +0000 UTC m=+0.100864909 container cleanup d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:14:32 compute-0 systemd[1]: libpod-conmon-d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f.scope: Deactivated successfully.
Nov 29 08:14:32 compute-0 podman[291578]: 2025-11-29 08:14:32.909484612 +0000 UTC m=+0.035751887 container remove d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.914 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2e1410-ffb0-4724-aef2-993ee3210a6c]: (4, ('Sat Nov 29 08:14:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f)\nd23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f\nSat Nov 29 08:14:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f (d23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f)\nd23b5cad16de08aecebc407082690b709024a44454cb53cb8ee27c8a7aa8965f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.916 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[77ce52ea-8542-4857-9ca9-782912e03cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.917 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e23492e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.919 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:32 compute-0 kernel: tap6e23492e-b0: left promiscuous mode
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.940 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.942 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ff941970-3688-461a-b07b-f48fdc146a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.954 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4e230a85-245c-41d0-9efc-981dbdb541a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.955 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8d35a3d3-d6dc-4ef2-99b1-bdfd9e523229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.969 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8f58c5-039e-41ab-881e-8b52bcabb913]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621276, 'reachable_time': 32313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291601, 'error': None, 'target': 'ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d6e23492e\x2dbeff\x2d43f6\x2db4d1\x2df88ebeea0b6f.mount: Deactivated successfully.
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.973 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e23492e-beff-43f6-b4d1-f88ebeea0b6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:14:32 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:32.974 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6205f3-b508-4a14-9ee6-19c29c0c2356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:32 compute-0 nova_compute[255040]: 2025-11-29 08:14:32.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3692703232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.422 255071 DEBUG nova.virt.libvirt.vif [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-627045043',display_name='tempest-TestVolumeBootPattern-server-627045043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-627045043',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBNqVOtasX0MqRaMqqfsWVfBGlBxHyLONahirMfYc0xM/PP91rZ4W+N/NUA4y30TxcMcH62LfUYChDkxcMCwFGnIBRbZARerRoVNJBX6SaD1meU9QKaSGEO9I5Zm9Q8bzQ==',key_name='tempest-TestVolumeBootPattern-1223045967',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3df24932e2a44aeab3c2aece8a045774',ramdisk_id='',reservation_id='r-r93fjp7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1666331213',owner_user_name='tempest-TestVolumeBootPattern-1666331213-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:56Z,user_data=None,user_id='5e62d407203540599a65ac50d5d447b9',uuid=cde9039b-1882-4723-9524-c51a289f67b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, 
"qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.423 255071 DEBUG nova.network.os_vif_util [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converting VIF {"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.424 255071 DEBUG nova.network.os_vif_util [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.424 255071 DEBUG os_vif [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.427 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.427 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fb97b8d-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.429 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.430 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.432 255071 INFO os_vif [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:12:20,bridge_name='br-int',has_traffic_filtering=True,id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7,network=Network(6e23492e-beff-43f6-b4d1-f88ebeea0b6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb97b8d-79')
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Nov 29 08:14:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3692703232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 08:14:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.621 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.621 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.621 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.677 255071 INFO nova.virt.libvirt.driver [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Deleting instance files /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0_del
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.678 255071 INFO nova.virt.libvirt.driver [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Deletion of /var/lib/nova/instances/cde9039b-1882-4723-9524-c51a289f67b0_del complete
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Nov 29 08:14:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 08:14:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Nov 29 08:14:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 39 KiB/s wr, 287 op/s
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.973 255071 DEBUG nova.compute.manager [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-unplugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.973 255071 DEBUG oslo_concurrency.lockutils [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.973 255071 DEBUG oslo_concurrency.lockutils [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.974 255071 DEBUG oslo_concurrency.lockutils [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.974 255071 DEBUG nova.compute.manager [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] No waiting events found dispatching network-vif-unplugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:33 compute-0 nova_compute[255040]: 2025-11-29 08:14:33.974 255071 DEBUG nova.compute.manager [req-36985458-0627-400b-b3cc-09c0ff459033 req-1abf633b-df54-40bc-bbbd-4c39f6dfb271 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-unplugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.022 255071 INFO nova.compute.manager [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Took 1.46 seconds to destroy the instance on the hypervisor.
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.023 255071 DEBUG oslo.service.loopingcall [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.023 255071 DEBUG nova.compute.manager [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.023 255071 DEBUG nova.network.neutron [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.360 255071 DEBUG nova.network.neutron [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updated VIF entry in instance network info cache for port 9fb97b8d-7982-4dac-8c85-e972bacc8ad7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.360 255071 DEBUG nova.network.neutron [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [{"id": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "address": "fa:16:3e:9f:12:20", "network": {"id": "6e23492e-beff-43f6-b4d1-f88ebeea0b6f", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-868779300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3df24932e2a44aeab3c2aece8a045774", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb97b8d-79", "ovs_interfaceid": "9fb97b8d-7982-4dac-8c85-e972bacc8ad7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.543 255071 DEBUG oslo_concurrency.lockutils [req-293a9d69-1909-4f17-8936-e13cb6972c11 req-965939e8-6ae7-4272-9aeb-9d9019783817 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-cde9039b-1882-4723-9524-c51a289f67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:34 compute-0 ceph-mon[75237]: osdmap e390: 3 total, 3 up, 3 in
Nov 29 08:14:34 compute-0 ceph-mon[75237]: osdmap e391: 3 total, 3 up, 3 in
Nov 29 08:14:34 compute-0 ceph-mon[75237]: pgmap v1789: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 39 KiB/s wr, 287 op/s
Nov 29 08:14:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Nov 29 08:14:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 08:14:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Nov 29 08:14:34 compute-0 nova_compute[255040]: 2025-11-29 08:14:34.957 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.427 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating instance_info_cache with network_info: [{"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.627 255071 DEBUG nova.compute.manager [req-80b7fb68-426c-438b-a696-342cfff0de88 req-7fcc7001-f04f-4226-a15b-37ff2189c59a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-deleted-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.627 255071 INFO nova.compute.manager [req-80b7fb68-426c-438b-a696-342cfff0de88 req-7fcc7001-f04f-4226-a15b-37ff2189c59a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Neutron deleted interface 9fb97b8d-7982-4dac-8c85-e972bacc8ad7; detaching it from the instance and deleting it from the info cache
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.627 255071 DEBUG nova.network.neutron [req-80b7fb68-426c-438b-a696-342cfff0de88 req-7fcc7001-f04f-4226-a15b-37ff2189c59a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.631 255071 DEBUG nova.network.neutron [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.636 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-73161fa0-86cc-4d12-bbb4-64386b62bf99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.637 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.637 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.637 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.637 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.637 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.638 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.638 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.715 255071 DEBUG nova.compute.manager [req-80b7fb68-426c-438b-a696-342cfff0de88 req-7fcc7001-f04f-4226-a15b-37ff2189c59a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Detach interface failed, port_id=9fb97b8d-7982-4dac-8c85-e972bacc8ad7, reason: Instance cde9039b-1882-4723-9524-c51a289f67b0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.720 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.720 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.720 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.721 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.721 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:35 compute-0 nova_compute[255040]: 2025-11-29 08:14:35.744 255071 INFO nova.compute.manager [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Took 1.72 seconds to deallocate network for instance.
Nov 29 08:14:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Nov 29 08:14:35 compute-0 ceph-mon[75237]: osdmap e392: 3 total, 3 up, 3 in
Nov 29 08:14:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 08:14:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Nov 29 08:14:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 9.0 KiB/s wr, 156 op/s
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.057 255071 DEBUG nova.compute.manager [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.058 255071 DEBUG oslo_concurrency.lockutils [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "cde9039b-1882-4723-9524-c51a289f67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.058 255071 DEBUG oslo_concurrency.lockutils [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.058 255071 DEBUG oslo_concurrency.lockutils [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.059 255071 DEBUG nova.compute.manager [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] No waiting events found dispatching network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.059 255071 WARNING nova.compute.manager [req-653e0923-cd78-4dbd-8cbb-8a06a396e771 req-dee5c658-2aa1-4055-9042-401ed3411cb7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Received unexpected event network-vif-plugged-9fb97b8d-7982-4dac-8c85-e972bacc8ad7 for instance with vm_state active and task_state deleting.
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.106 255071 INFO nova.compute.manager [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Took 0.36 seconds to detach 1 volumes for instance.
Nov 29 08:14:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1137498501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.168 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.199 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.199 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.269 255071 DEBUG oslo_concurrency.processutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.363 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.363 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.518 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.519 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4195MB free_disk=59.94236755371094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.519 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571408966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.754 255071 DEBUG oslo_concurrency.processutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.758 255071 DEBUG nova.compute.provider_tree [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.771 255071 DEBUG nova.scheduler.client.report [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.789 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.792 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.811 255071 INFO nova.scheduler.client.report [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Deleted allocations for instance cde9039b-1882-4723-9524-c51a289f67b0
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.854 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 73161fa0-86cc-4d12-bbb4-64386b62bf99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.854 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.855 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.888 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:36 compute-0 ceph-mon[75237]: osdmap e393: 3 total, 3 up, 3 in
Nov 29 08:14:36 compute-0 ceph-mon[75237]: pgmap v1792: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 9.0 KiB/s wr, 156 op/s
Nov 29 08:14:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1137498501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/571408966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:36 compute-0 nova_compute[255040]: 2025-11-29 08:14:36.921 255071 DEBUG oslo_concurrency.lockutils [None req-179cf219-7c9a-411e-bc4d-63f42bca8ddc 5e62d407203540599a65ac50d5d447b9 3df24932e2a44aeab3c2aece8a045774 - - default default] Lock "cde9039b-1882-4723-9524-c51a289f67b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546462735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546462735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959726637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.336 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.341 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.353 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.371 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.371 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.710 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3546462735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3546462735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2959726637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.900 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.901 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.902 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.902 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.902 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.903 255071 INFO nova.compute.manager [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Terminating instance
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.904 255071 DEBUG nova.compute.manager [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:14:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 11 KiB/s wr, 185 op/s
Nov 29 08:14:37 compute-0 nova_compute[255040]: 2025-11-29 08:14:37.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:38 compute-0 kernel: tap118420be-1b (unregistering): left promiscuous mode
Nov 29 08:14:38 compute-0 NetworkManager[49116]: <info>  [1764404078.2638] device (tap118420be-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:14:38 compute-0 ovn_controller[153295]: 2025-11-29T08:14:38Z|00219|binding|INFO|Releasing lport 118420be-1bec-4d74-a16f-38c9916df2ec from this chassis (sb_readonly=0)
Nov 29 08:14:38 compute-0 ovn_controller[153295]: 2025-11-29T08:14:38Z|00220|binding|INFO|Setting lport 118420be-1bec-4d74-a16f-38c9916df2ec down in Southbound
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.270 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 ovn_controller[153295]: 2025-11-29T08:14:38Z|00221|binding|INFO|Removing iface tap118420be-1b ovn-installed in OVS
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.291 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:38.305 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:66:24 10.100.0.9'], port_security=['fa:16:3e:8d:66:24 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73161fa0-86cc-4d12-bbb4-64386b62bf99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4685ebb42c1b47019026ac85736a2f9e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '145d1636-c578-4a94-b60c-1faee92485c3 c93e00d2-a27f-4f8a-a521-48da2c2cd6cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69515d80-21fb-4aef-ab68-ee0d263d9564, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=118420be-1bec-4d74-a16f-38c9916df2ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:38.306 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 118420be-1bec-4d74-a16f-38c9916df2ec in datapath 3ac59a05-6e29-4fc3-9a46-eac16f636fbf unbound from our chassis
Nov 29 08:14:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:38.308 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ac59a05-6e29-4fc3-9a46-eac16f636fbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:14:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:38.308 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c3de7e0d-d5b1-45e4-ae6f-3e9a663d38c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:38.309 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf namespace which is not needed anymore
Nov 29 08:14:38 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 29 08:14:38 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 23.076s CPU time.
Nov 29 08:14:38 compute-0 systemd-machined[216271]: Machine qemu-22-instance-00000016 terminated.
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.429 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.473 255071 DEBUG nova.compute.manager [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-unplugged-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.474 255071 DEBUG oslo_concurrency.lockutils [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.474 255071 DEBUG oslo_concurrency.lockutils [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.474 255071 DEBUG oslo_concurrency.lockutils [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.474 255071 DEBUG nova.compute.manager [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] No waiting events found dispatching network-vif-unplugged-118420be-1bec-4d74-a16f-38c9916df2ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.474 255071 DEBUG nova.compute.manager [req-f28b3403-7e9a-420b-99d1-def8f8c07188 req-6add39f9-8e67-49af-8977-0d67c54954fc cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-unplugged-118420be-1bec-4d74-a16f-38c9916df2ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.522 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [NOTICE]   (289692) : haproxy version is 2.8.14-c23fe91
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [NOTICE]   (289692) : path to executable is /usr/sbin/haproxy
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [WARNING]  (289692) : Exiting Master process...
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [WARNING]  (289692) : Exiting Master process...
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [ALERT]    (289692) : Current worker (289694) exited with code 143 (Terminated)
Nov 29 08:14:38 compute-0 neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf[289688]: [WARNING]  (289692) : All workers exited. Exiting... (0)
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.527 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 systemd[1]: libpod-9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a.scope: Deactivated successfully.
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.534 255071 INFO nova.virt.libvirt.driver [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Instance destroyed successfully.
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.535 255071 DEBUG nova.objects.instance [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lazy-loading 'resources' on Instance uuid 73161fa0-86cc-4d12-bbb4-64386b62bf99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:14:38 compute-0 podman[291712]: 2025-11-29 08:14:38.536510752 +0000 UTC m=+0.140841320 container died 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.551 255071 DEBUG nova.virt.libvirt.vif [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-2007858834',display_name='tempest-SnapshotDataIntegrityTests-server-2007858834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-2007858834',id=22,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBKaRIcO7AmYjaJvzcprrw01/xfXd0JXKSpN5qfxtlP/ZK/lXduysUlgNUBiHURonuasBwtRu1mSrog6vjuWzi0jEJhcL/o3xoH/UXmwYNWA1x2U/xHUSdn4L6A8zrUHjg==',key_name='tempest-keypair-1134313540',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4685ebb42c1b47019026ac85736a2f9e',ramdisk_id='',reservation_id='r-vdbkyupn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-438088631',owner_user_name='tempest-SnapshotDataIntegrityTests-438088631-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='628f195ee2d74504ac3b01a64427c25f',uuid=73161fa0-86cc-4d12-bbb4-64386b62bf99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.552 255071 DEBUG nova.network.os_vif_util [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converting VIF {"id": "118420be-1bec-4d74-a16f-38c9916df2ec", "address": "fa:16:3e:8d:66:24", "network": {"id": "3ac59a05-6e29-4fc3-9a46-eac16f636fbf", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-1938605210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4685ebb42c1b47019026ac85736a2f9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap118420be-1b", "ovs_interfaceid": "118420be-1bec-4d74-a16f-38c9916df2ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.553 255071 DEBUG nova.network.os_vif_util [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.553 255071 DEBUG os_vif [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.556 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.556 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap118420be-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.557 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.559 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 nova_compute[255040]: 2025-11-29 08:14:38.561 255071 INFO os_vif [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:66:24,bridge_name='br-int',has_traffic_filtering=True,id=118420be-1bec-4d74-a16f-38c9916df2ec,network=Network(3ac59a05-6e29-4fc3-9a46-eac16f636fbf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap118420be-1b')
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:14:38
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', 'volumes', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 08:14:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:14:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Nov 29 08:14:39 compute-0 ceph-mon[75237]: pgmap v1793: 305 pgs: 305 active+clean; 251 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 11 KiB/s wr, 185 op/s
Nov 29 08:14:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 08:14:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Nov 29 08:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-24178fad610e2166897e96c8259caaf4dfab50842dcd197f0abfc061a790e85c-merged.mount: Deactivated successfully.
Nov 29 08:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a-userdata-shm.mount: Deactivated successfully.
Nov 29 08:14:39 compute-0 podman[291712]: 2025-11-29 08:14:39.292969671 +0000 UTC m=+0.897300239 container cleanup 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:14:39 compute-0 systemd[1]: libpod-conmon-9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a.scope: Deactivated successfully.
Nov 29 08:14:39 compute-0 podman[291766]: 2025-11-29 08:14:39.345279275 +0000 UTC m=+0.081788733 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 29 08:14:39 compute-0 podman[291781]: 2025-11-29 08:14:39.490306687 +0000 UTC m=+0.163804211 container remove 9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.495 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9c4529-c4bb-4041-a5a8-4d43d346c4a3]: (4, ('Sat Nov 29 08:14:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf (9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a)\n9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a\nSat Nov 29 08:14:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf (9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a)\n9895dd5b1fccf542ea5e659eedb86e59f8e736c9ed5fe3b5c942e197dd4d413a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.497 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f2974aa9-7a45-4dad-b75e-65f4f982fed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.498 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac59a05-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.500 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:39 compute-0 kernel: tap3ac59a05-60: left promiscuous mode
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.502 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.507 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7bdcb8-ad84-44a9-909d-5f54f39535a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.517 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.523 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb4b9dc-f128-4cbf-bfd4-e62788a697c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.524 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[46cd3704-2882-4c28-929b-25884bc9812c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.540 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c1337cee-a237-4c16-b93f-a905032a3934]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622491, 'reachable_time': 19456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291802, 'error': None, 'target': 'ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d3ac59a05\x2d6e29\x2d4fc3\x2d9a46\x2deac16f636fbf.mount: Deactivated successfully.
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.544 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ac59a05-6e29-4fc3-9a46-eac16f636fbf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:14:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:39.544 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[415b028d-4a70-43f5-87ac-ccc9f896c738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.780 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404064.7792234, 40d7aec5-9705-4885-8d58-7fcfdb8eac5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.780 255071 INFO nova.compute.manager [-] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] VM Stopped (Lifecycle Event)
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.817 255071 DEBUG nova.compute.manager [None req-cab2d1d5-e816-4c58-b4a5-0642e6bb93b0 - - - - - -] [instance: 40d7aec5-9705-4885-8d58-7fcfdb8eac5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 249 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 8.2 KiB/s wr, 152 op/s
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.960 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:39 compute-0 nova_compute[255040]: 2025-11-29 08:14:39.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:40 compute-0 ceph-mon[75237]: osdmap e394: 3 total, 3 up, 3 in
Nov 29 08:14:40 compute-0 ceph-mon[75237]: pgmap v1795: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 249 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 8.2 KiB/s wr, 152 op/s
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.586 255071 DEBUG nova.compute.manager [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.587 255071 DEBUG oslo_concurrency.lockutils [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.587 255071 DEBUG oslo_concurrency.lockutils [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.588 255071 DEBUG oslo_concurrency.lockutils [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.588 255071 DEBUG nova.compute.manager [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] No waiting events found dispatching network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:14:40 compute-0 nova_compute[255040]: 2025-11-29 08:14:40.588 255071 WARNING nova.compute.manager [req-48f27f31-7b07-4ff8-acc1-fd1067811b10 req-44421e47-5b35-4b43-90b0-222ec8013329 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received unexpected event network-vif-plugged-118420be-1bec-4d74-a16f-38c9916df2ec for instance with vm_state active and task_state deleting.
Nov 29 08:14:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904276608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2904276608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3547840655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3547840655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2904276608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2904276608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3547840655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3547840655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 11 KiB/s wr, 192 op/s
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.069 255071 INFO nova.virt.libvirt.driver [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Deleting instance files /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99_del
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.070 255071 INFO nova.virt.libvirt.driver [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Deletion of /var/lib/nova/instances/73161fa0-86cc-4d12-bbb4-64386b62bf99_del complete
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.115 255071 INFO nova.compute.manager [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Took 4.21 seconds to destroy the instance on the hypervisor.
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.116 255071 DEBUG oslo.service.loopingcall [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.116 255071 DEBUG nova.compute.manager [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:14:42 compute-0 nova_compute[255040]: 2025-11-29 08:14:42.116 255071 DEBUG nova.network.neutron [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:14:42 compute-0 ceph-mon[75237]: pgmap v1796: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 11 KiB/s wr, 192 op/s
Nov 29 08:14:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:43.053 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:43.054 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.054 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.077 255071 DEBUG nova.network.neutron [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.094 255071 INFO nova.compute.manager [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Took 0.98 seconds to deallocate network for instance.
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.149 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.149 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.169 255071 DEBUG nova.compute.manager [req-0fce13b6-1b89-4242-9299-3527df5cdd12 req-fd60a29a-75d0-4161-b34c-d725c19a4ce0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Received event network-vif-deleted-118420be-1bec-4d74-a16f-38c9916df2ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.194 255071 DEBUG oslo_concurrency.processutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.559 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4105765803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2084146712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.654 255071 DEBUG oslo_concurrency.processutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.660 255071 DEBUG nova.compute.provider_tree [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.674 255071 DEBUG nova.scheduler.client.report [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.691 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.714 255071 INFO nova.scheduler.client.report [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Deleted allocations for instance 73161fa0-86cc-4d12-bbb4-64386b62bf99
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.734 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:43 compute-0 nova_compute[255040]: 2025-11-29 08:14:43.783 255071 DEBUG oslo_concurrency.lockutils [None req-e902f9ed-b06c-495a-b3bc-9cb36afb6ad8 628f195ee2d74504ac3b01a64427c25f 4685ebb42c1b47019026ac85736a2f9e - - default default] Lock "73161fa0-86cc-4d12-bbb4-64386b62bf99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Nov 29 08:14:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 08:14:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Nov 29 08:14:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 6.4 KiB/s wr, 113 op/s
Nov 29 08:14:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4105765803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2084146712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:43 compute-0 ceph-mon[75237]: osdmap e395: 3 total, 3 up, 3 in
Nov 29 08:14:44 compute-0 nova_compute[255040]: 2025-11-29 08:14:44.028 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:44 compute-0 nova_compute[255040]: 2025-11-29 08:14:44.962 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Nov 29 08:14:45 compute-0 ceph-mon[75237]: pgmap v1798: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 6.4 KiB/s wr, 113 op/s
Nov 29 08:14:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 08:14:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Nov 29 08:14:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 8.0 KiB/s wr, 151 op/s
Nov 29 08:14:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Nov 29 08:14:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 08:14:46 compute-0 ceph-mon[75237]: osdmap e396: 3 total, 3 up, 3 in
Nov 29 08:14:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Nov 29 08:14:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Nov 29 08:14:47 compute-0 ceph-mon[75237]: pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 8.0 KiB/s wr, 151 op/s
Nov 29 08:14:47 compute-0 ceph-mon[75237]: osdmap e397: 3 total, 3 up, 3 in
Nov 29 08:14:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 08:14:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Nov 29 08:14:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654889162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1654889162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:47 compute-0 nova_compute[255040]: 2025-11-29 08:14:47.801 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404072.7999084, cde9039b-1882-4723-9524-c51a289f67b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:47 compute-0 nova_compute[255040]: 2025-11-29 08:14:47.801 255071 INFO nova.compute.manager [-] [instance: cde9039b-1882-4723-9524-c51a289f67b0] VM Stopped (Lifecycle Event)
Nov 29 08:14:47 compute-0 nova_compute[255040]: 2025-11-29 08:14:47.818 255071 DEBUG nova.compute.manager [None req-a0006d75-5df3-4fc6-a62b-3660d2623724 - - - - - -] [instance: cde9039b-1882-4723-9524-c51a289f67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:47 compute-0 podman[291827]: 2025-11-29 08:14:47.893281841 +0000 UTC m=+0.064740522 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 08:14:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.4 KiB/s wr, 87 op/s
Nov 29 08:14:48 compute-0 ceph-mon[75237]: osdmap e398: 3 total, 3 up, 3 in
Nov 29 08:14:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1654889162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1654889162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:48 compute-0 nova_compute[255040]: 2025-11-29 08:14:48.601 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Nov 29 08:14:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 08:14:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Nov 29 08:14:49 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:14:49.056 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:49 compute-0 ceph-mon[75237]: pgmap v1803: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.4 KiB/s wr, 87 op/s
Nov 29 08:14:49 compute-0 ceph-mon[75237]: osdmap e399: 3 total, 3 up, 3 in
Nov 29 08:14:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1634940926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 08:14:49 compute-0 nova_compute[255040]: 2025-11-29 08:14:49.964 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Nov 29 08:14:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 08:14:50 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1634940926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Nov 29 08:14:51 compute-0 ceph-mon[75237]: pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 08:14:51 compute-0 ceph-mon[75237]: osdmap e400: 3 total, 3 up, 3 in
Nov 29 08:14:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 85 op/s
Nov 29 08:14:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Nov 29 08:14:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 08:14:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Nov 29 08:14:52 compute-0 ceph-mon[75237]: pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 85 op/s
Nov 29 08:14:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/184586669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/184586669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:14:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/321611209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Nov 29 08:14:53 compute-0 ceph-mon[75237]: osdmap e401: 3 total, 3 up, 3 in
Nov 29 08:14:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/184586669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/184586669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/321611209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:14:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 08:14:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Nov 29 08:14:53 compute-0 nova_compute[255040]: 2025-11-29 08:14:53.533 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404078.531509, 73161fa0-86cc-4d12-bbb4-64386b62bf99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:14:53 compute-0 nova_compute[255040]: 2025-11-29 08:14:53.533 255071 INFO nova.compute.manager [-] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] VM Stopped (Lifecycle Event)
Nov 29 08:14:53 compute-0 nova_compute[255040]: 2025-11-29 08:14:53.556 255071 DEBUG nova.compute.manager [None req-99921ac0-09cd-407a-a252-674aade0dad0 - - - - - -] [instance: 73161fa0-86cc-4d12-bbb4-64386b62bf99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:14:53 compute-0 nova_compute[255040]: 2025-11-29 08:14:53.603 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Nov 29 08:14:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 08:14:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Nov 29 08:14:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.2 KiB/s wr, 53 op/s
Nov 29 08:14:54 compute-0 ceph-mon[75237]: osdmap e402: 3 total, 3 up, 3 in
Nov 29 08:14:54 compute-0 ceph-mon[75237]: osdmap e403: 3 total, 3 up, 3 in
Nov 29 08:14:54 compute-0 ceph-mon[75237]: pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.2 KiB/s wr, 53 op/s
Nov 29 08:14:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Nov 29 08:14:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 08:14:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Nov 29 08:14:54 compute-0 nova_compute[255040]: 2025-11-29 08:14:54.967 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:55 compute-0 sudo[291848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:55 compute-0 sudo[291848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:55 compute-0 sudo[291848]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:55 compute-0 sudo[291873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:55 compute-0 sudo[291873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:55 compute-0 sudo[291873]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:55 compute-0 sudo[291898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:55 compute-0 sudo[291898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Nov 29 08:14:55 compute-0 ceph-mon[75237]: osdmap e404: 3 total, 3 up, 3 in
Nov 29 08:14:55 compute-0 sudo[291898]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 08:14:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Nov 29 08:14:55 compute-0 sudo[291923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:14:55 compute-0 sudo[291923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.9 KiB/s wr, 116 op/s
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003470780832844957 of space, bias 1.0, pg target 0.10412342498534871 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:14:56 compute-0 sudo[291923]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 02eeda3e-9b0a-4baf-b46c-bd7a762ce9da does not exist
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 46094a18-d4db-47d1-a785-444e24c419c8 does not exist
Nov 29 08:14:56 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 060c2962-931b-4129-9b80-f9c2a741b8ca does not exist
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:56 compute-0 sudo[291979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:56 compute-0 sudo[291979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:56 compute-0 sudo[291979]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:56 compute-0 sudo[292004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:56 compute-0 sudo[292004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:56 compute-0 sudo[292004]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:56 compute-0 sudo[292029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:56 compute-0 sudo[292029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:56 compute-0 sudo[292029]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:56 compute-0 sudo[292054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:14:56 compute-0 sudo[292054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Nov 29 08:14:56 compute-0 ceph-mon[75237]: osdmap e405: 3 total, 3 up, 3 in
Nov 29 08:14:56 compute-0 ceph-mon[75237]: pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.9 KiB/s wr, 116 op/s
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.913130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404096913318, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1929, "num_deletes": 274, "total_data_size": 2641144, "memory_usage": 2687264, "flush_reason": "Manual Compaction"}
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 08:14:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 08:14:56 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404096937225, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2593538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32647, "largest_seqno": 34575, "table_properties": {"data_size": 2584275, "index_size": 5821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 20313, "raw_average_key_size": 21, "raw_value_size": 2565580, "raw_average_value_size": 2746, "num_data_blocks": 253, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403979, "oldest_key_time": 1764403979, "file_creation_time": 1764404096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 24193 microseconds, and 7971 cpu microseconds.
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.937326) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2593538 bytes OK
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.937349) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.941527) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.941549) EVENT_LOG_v1 {"time_micros": 1764404096941541, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.941569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2632455, prev total WAL file size 2632496, number of live WAL files 2.
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.942367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2532KB)], [68(8715KB)]
Nov 29 08:14:56 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404096942505, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11517970, "oldest_snapshot_seqno": -1}
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6590 keys, 9801929 bytes, temperature: kUnknown
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404097019546, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9801929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9753728, "index_size": 30617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 166836, "raw_average_key_size": 25, "raw_value_size": 9631210, "raw_average_value_size": 1461, "num_data_blocks": 1225, "num_entries": 6590, "num_filter_entries": 6590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.019840) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9801929 bytes
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.022563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 127.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 7136, records dropped: 546 output_compression: NoCompression
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.022590) EVENT_LOG_v1 {"time_micros": 1764404097022577, "job": 38, "event": "compaction_finished", "compaction_time_micros": 77160, "compaction_time_cpu_micros": 28374, "output_level": 6, "num_output_files": 1, "total_output_size": 9801929, "num_input_records": 7136, "num_output_records": 6590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404097023422, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404097025296, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:56.942292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.025369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.025375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.025376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.025378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:14:57.025379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.125070711 +0000 UTC m=+0.044272078 container create e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:14:57 compute-0 systemd[1]: Started libpod-conmon-e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe.scope.
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.108584526 +0000 UTC m=+0.027785913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.224297264 +0000 UTC m=+0.143498651 container init e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.23153386 +0000 UTC m=+0.150735237 container start e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:57 compute-0 sharp_neumann[292133]: 167 167
Nov 29 08:14:57 compute-0 systemd[1]: libpod-e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe.scope: Deactivated successfully.
Nov 29 08:14:57 compute-0 conmon[292133]: conmon e14319e92bd3ef2792fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe.scope/container/memory.events
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.239112796 +0000 UTC m=+0.158314213 container attach e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.239595788 +0000 UTC m=+0.158797165 container died e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-05237b8bd90de6e5fca05515ca3cf7e0d6d1addbcfcbaced3e27f3d58890560d-merged.mount: Deactivated successfully.
Nov 29 08:14:57 compute-0 podman[292117]: 2025-11-29 08:14:57.289457067 +0000 UTC m=+0.208658444 container remove e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:57 compute-0 systemd[1]: libpod-conmon-e14319e92bd3ef2792fc058e787ebcaafd13148b6fddf807234195548cfadcfe.scope: Deactivated successfully.
Nov 29 08:14:57 compute-0 podman[292155]: 2025-11-29 08:14:57.462082445 +0000 UTC m=+0.050341862 container create db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:57 compute-0 systemd[1]: Started libpod-conmon-db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb.scope.
Nov 29 08:14:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:57 compute-0 podman[292155]: 2025-11-29 08:14:57.437639565 +0000 UTC m=+0.025899002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 11 KiB/s wr, 237 op/s
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Nov 29 08:14:58 compute-0 podman[292155]: 2025-11-29 08:14:58.12303619 +0000 UTC m=+0.711295637 container init db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 08:14:58 compute-0 ceph-mon[75237]: osdmap e406: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 podman[292155]: 2025-11-29 08:14:58.133581156 +0000 UTC m=+0.721840583 container start db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:14:58 compute-0 podman[292155]: 2025-11-29 08:14:58.147190333 +0000 UTC m=+0.735449770 container attach db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3486741922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3486741922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 nova_compute[255040]: 2025-11-29 08:14:58.606 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807096218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807096218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1193846381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1193846381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Nov 29 08:14:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Nov 29 08:14:59 compute-0 ceph-mon[75237]: pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 11 KiB/s wr, 237 op/s
Nov 29 08:14:59 compute-0 ceph-mon[75237]: osdmap e407: 3 total, 3 up, 3 in
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3486741922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3486741922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2807096218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2807096218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1193846381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1193846381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: osdmap e408: 3 total, 3 up, 3 in
Nov 29 08:14:59 compute-0 reverent_cannon[292171]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:14:59 compute-0 reverent_cannon[292171]: --> relative data size: 1.0
Nov 29 08:14:59 compute-0 reverent_cannon[292171]: --> All data devices are unavailable
Nov 29 08:14:59 compute-0 systemd[1]: libpod-db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb.scope: Deactivated successfully.
Nov 29 08:14:59 compute-0 systemd[1]: libpod-db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb.scope: Consumed 1.036s CPU time.
Nov 29 08:14:59 compute-0 podman[292155]: 2025-11-29 08:14:59.247820431 +0000 UTC m=+1.836079858 container died db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5cb021d1d5fff04f2ba809a102145b829bba4699b3d09c1986f4340951883d9-merged.mount: Deactivated successfully.
Nov 29 08:14:59 compute-0 podman[292155]: 2025-11-29 08:14:59.318371708 +0000 UTC m=+1.906631125 container remove db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:14:59 compute-0 systemd[1]: libpod-conmon-db7907006b69c726594943102640437833d5be13985f10ac747897873b2c07eb.scope: Deactivated successfully.
Nov 29 08:14:59 compute-0 sudo[292054]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785231768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785231768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:59 compute-0 sudo[292214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:59 compute-0 sudo[292214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:59 compute-0 sudo[292214]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:59 compute-0 sudo[292239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:59 compute-0 sudo[292239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:59 compute-0 sudo[292239]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:59 compute-0 sudo[292264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:59 compute-0 sudo[292264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:59 compute-0 sudo[292264]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:59 compute-0 sudo[292289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:14:59 compute-0 sudo[292289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 11 KiB/s wr, 239 op/s
Nov 29 08:14:59 compute-0 nova_compute[255040]: 2025-11-29 08:14:59.968 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.052573484 +0000 UTC m=+0.055256475 container create b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:15:00 compute-0 systemd[1]: Started libpod-conmon-b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254.scope.
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.02504179 +0000 UTC m=+0.027724841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.129761491 +0000 UTC m=+0.132444472 container init b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.136498703 +0000 UTC m=+0.139181694 container start b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.140888492 +0000 UTC m=+0.143571473 container attach b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:15:00 compute-0 wonderful_khorana[292370]: 167 167
Nov 29 08:15:00 compute-0 systemd[1]: libpod-b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254.scope: Deactivated successfully.
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.141852018 +0000 UTC m=+0.144534989 container died b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:15:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1785231768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1785231768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef705abd90d17cb5966240866b0c070e5d44df228dbb18594a9f7f132196bbfd-merged.mount: Deactivated successfully.
Nov 29 08:15:00 compute-0 podman[292354]: 2025-11-29 08:15:00.208733497 +0000 UTC m=+0.211416478 container remove b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_khorana, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:15:00 compute-0 systemd[1]: libpod-conmon-b9eec04a8bf3543165ced2ef66393343ea8a2c76eed326f9801dd79c6d629254.scope: Deactivated successfully.
Nov 29 08:15:00 compute-0 podman[292392]: 2025-11-29 08:15:00.391191041 +0000 UTC m=+0.057110055 container create e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:00 compute-0 systemd[1]: Started libpod-conmon-e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b.scope.
Nov 29 08:15:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:00 compute-0 podman[292392]: 2025-11-29 08:15:00.365397754 +0000 UTC m=+0.031316748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c591a51c2acb3226b69aa813fb5b78096f50f7a6ac5efbd8cf41009392ab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c591a51c2acb3226b69aa813fb5b78096f50f7a6ac5efbd8cf41009392ab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c591a51c2acb3226b69aa813fb5b78096f50f7a6ac5efbd8cf41009392ab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912c591a51c2acb3226b69aa813fb5b78096f50f7a6ac5efbd8cf41009392ab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:00 compute-0 podman[292392]: 2025-11-29 08:15:00.476117708 +0000 UTC m=+0.142036722 container init e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:15:00 compute-0 podman[292392]: 2025-11-29 08:15:00.484596698 +0000 UTC m=+0.150515692 container start e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:15:00 compute-0 podman[292392]: 2025-11-29 08:15:00.490804075 +0000 UTC m=+0.156723089 container attach e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:15:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373209578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373209578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:01 compute-0 ceph-mon[75237]: pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 11 KiB/s wr, 239 op/s
Nov 29 08:15:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1373209578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1373209578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]: {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     "0": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "devices": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "/dev/loop3"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             ],
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_name": "ceph_lv0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_size": "21470642176",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "name": "ceph_lv0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "tags": {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.crush_device_class": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.encrypted": "0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_id": "0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.vdo": "0"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             },
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "vg_name": "ceph_vg0"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         }
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     ],
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     "1": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "devices": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "/dev/loop4"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             ],
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_name": "ceph_lv1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_size": "21470642176",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "name": "ceph_lv1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "tags": {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.crush_device_class": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.encrypted": "0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_id": "1",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.vdo": "0"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             },
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "vg_name": "ceph_vg1"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         }
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     ],
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     "2": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "devices": [
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "/dev/loop5"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             ],
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_name": "ceph_lv2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_size": "21470642176",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "name": "ceph_lv2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "tags": {
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.crush_device_class": "",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.encrypted": "0",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osd_id": "2",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:                 "ceph.vdo": "0"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             },
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "type": "block",
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:             "vg_name": "ceph_vg2"
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:         }
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]:     ]
Nov 29 08:15:01 compute-0 eloquent_fermat[292409]: }
Nov 29 08:15:01 compute-0 systemd[1]: libpod-e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b.scope: Deactivated successfully.
Nov 29 08:15:01 compute-0 podman[292392]: 2025-11-29 08:15:01.296598928 +0000 UTC m=+0.962517982 container died e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-912c591a51c2acb3226b69aa813fb5b78096f50f7a6ac5efbd8cf41009392ab9-merged.mount: Deactivated successfully.
Nov 29 08:15:01 compute-0 podman[292392]: 2025-11-29 08:15:01.358682987 +0000 UTC m=+1.024601981 container remove e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:15:01 compute-0 systemd[1]: libpod-conmon-e67dbbcf20d01be6b8428260d0c181267708df29b3c26aa35555db4e45dc976b.scope: Deactivated successfully.
Nov 29 08:15:01 compute-0 sudo[292289]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:01 compute-0 sudo[292428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:01 compute-0 sudo[292428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:01 compute-0 sudo[292428]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:01 compute-0 sudo[292453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:01 compute-0 sudo[292453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:01 compute-0 sudo[292453]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:01 compute-0 sudo[292478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:01 compute-0 sudo[292478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:01 compute-0 sudo[292478]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:01 compute-0 sudo[292503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:15:01 compute-0 sudo[292503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097290667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097290667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 13 KiB/s wr, 294 op/s
Nov 29 08:15:01 compute-0 podman[292568]: 2025-11-29 08:15:01.974487471 +0000 UTC m=+0.039328384 container create a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:15:02 compute-0 systemd[1]: Started libpod-conmon-a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e.scope.
Nov 29 08:15:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:02.039805798 +0000 UTC m=+0.104646731 container init a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:02.046739666 +0000 UTC m=+0.111580579 container start a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:02.051203076 +0000 UTC m=+0.116043989 container attach a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:01.957559303 +0000 UTC m=+0.022400246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:02 compute-0 nice_tu[292584]: 167 167
Nov 29 08:15:02 compute-0 systemd[1]: libpod-a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e.scope: Deactivated successfully.
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:02.057334812 +0000 UTC m=+0.122175735 container died a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a63fb51caa06fabfff6a824cc7e6a6c51dd8b38da47553384a722125beafd48-merged.mount: Deactivated successfully.
Nov 29 08:15:02 compute-0 podman[292568]: 2025-11-29 08:15:02.111503017 +0000 UTC m=+0.176343950 container remove a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:15:02 compute-0 systemd[1]: libpod-conmon-a8788cadbcd664412ba2f53ed1e59e9fec153f38c22a29196c3b174475c1ad1e.scope: Deactivated successfully.
Nov 29 08:15:02 compute-0 podman[292585]: 2025-11-29 08:15:02.146907784 +0000 UTC m=+0.118747613 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:15:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2097290667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2097290667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:02 compute-0 podman[292633]: 2025-11-29 08:15:02.331903857 +0000 UTC m=+0.048425480 container create 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:15:02 compute-0 systemd[1]: Started libpod-conmon-47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7.scope.
Nov 29 08:15:02 compute-0 podman[292633]: 2025-11-29 08:15:02.311885866 +0000 UTC m=+0.028407539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1afc67c51933d97891cae9a8034c9657ff8cfe06e019f69e948796f84ad47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1afc67c51933d97891cae9a8034c9657ff8cfe06e019f69e948796f84ad47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1afc67c51933d97891cae9a8034c9657ff8cfe06e019f69e948796f84ad47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1afc67c51933d97891cae9a8034c9657ff8cfe06e019f69e948796f84ad47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:02 compute-0 podman[292633]: 2025-11-29 08:15:02.430493844 +0000 UTC m=+0.147015527 container init 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:15:02 compute-0 podman[292633]: 2025-11-29 08:15:02.43812433 +0000 UTC m=+0.154645953 container start 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:15:02 compute-0 podman[292633]: 2025-11-29 08:15:02.442539549 +0000 UTC m=+0.159061192 container attach 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:15:03 compute-0 ceph-mon[75237]: pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 13 KiB/s wr, 294 op/s
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]: {
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_id": 2,
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "type": "bluestore"
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     },
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_id": 0,
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "type": "bluestore"
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     },
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_id": 1,
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:         "type": "bluestore"
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]:     }
Nov 29 08:15:03 compute-0 awesome_gagarin[292650]: }
Nov 29 08:15:03 compute-0 systemd[1]: libpod-47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7.scope: Deactivated successfully.
Nov 29 08:15:03 compute-0 podman[292633]: 2025-11-29 08:15:03.537331757 +0000 UTC m=+1.253853440 container died 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:15:03 compute-0 systemd[1]: libpod-47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7.scope: Consumed 1.096s CPU time.
Nov 29 08:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-53a1afc67c51933d97891cae9a8034c9657ff8cfe06e019f69e948796f84ad47-merged.mount: Deactivated successfully.
Nov 29 08:15:03 compute-0 nova_compute[255040]: 2025-11-29 08:15:03.610 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:03 compute-0 podman[292633]: 2025-11-29 08:15:03.615762469 +0000 UTC m=+1.332284092 container remove 47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:03 compute-0 systemd[1]: libpod-conmon-47b2fb89a901d0aaa8996659db39595e489f8d56f2796f97bcefe2c403cfaab7.scope: Deactivated successfully.
Nov 29 08:15:03 compute-0 sudo[292503]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:15:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:15:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:15:03 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:15:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c0798a1f-0f28-40de-8db0-96994afa1f29 does not exist
Nov 29 08:15:03 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 93f7e020-ea04-4c13-b1e8-b14986969f8a does not exist
Nov 29 08:15:03 compute-0 sudo[292697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:03 compute-0 sudo[292697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:03 compute-0 sudo[292697]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:03 compute-0 sudo[292722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:15:03 compute-0 sudo[292722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:03 compute-0 sudo[292722]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Nov 29 08:15:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 08:15:03 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Nov 29 08:15:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.8 KiB/s wr, 202 op/s
Nov 29 08:15:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010108291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010108291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:15:04 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:15:04 compute-0 ceph-mon[75237]: osdmap e409: 3 total, 3 up, 3 in
Nov 29 08:15:04 compute-0 ceph-mon[75237]: pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.8 KiB/s wr, 202 op/s
Nov 29 08:15:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1010108291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1010108291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2066044341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:04 compute-0 nova_compute[255040]: 2025-11-29 08:15:04.970 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 6.4 KiB/s wr, 167 op/s
Nov 29 08:15:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Nov 29 08:15:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2066044341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 08:15:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Nov 29 08:15:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578072751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:06 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578072751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:07 compute-0 ceph-mon[75237]: pgmap v1823: 305 pgs: 305 active+clean; 88 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 6.4 KiB/s wr, 167 op/s
Nov 29 08:15:07 compute-0 ceph-mon[75237]: osdmap e410: 3 total, 3 up, 3 in
Nov 29 08:15:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/578072751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:07 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/578072751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Nov 29 08:15:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 08:15:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Nov 29 08:15:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.3 KiB/s wr, 74 op/s
Nov 29 08:15:08 compute-0 ceph-mon[75237]: osdmap e411: 3 total, 3 up, 3 in
Nov 29 08:15:08 compute-0 nova_compute[255040]: 2025-11-29 08:15:08.612 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Nov 29 08:15:09 compute-0 ceph-mon[75237]: pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.3 KiB/s wr, 74 op/s
Nov 29 08:15:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 08:15:09 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Nov 29 08:15:09 compute-0 podman[292747]: 2025-11-29 08:15:09.901388259 +0000 UTC m=+0.065055789 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.5 KiB/s wr, 77 op/s
Nov 29 08:15:09 compute-0 nova_compute[255040]: 2025-11-29 08:15:09.972 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:10 compute-0 ceph-mon[75237]: osdmap e412: 3 total, 3 up, 3 in
Nov 29 08:15:10 compute-0 ceph-mon[75237]: pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.5 KiB/s wr, 77 op/s
Nov 29 08:15:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Nov 29 08:15:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 5.7 KiB/s wr, 141 op/s
Nov 29 08:15:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 08:15:12 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mon[75237]: pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 5.7 KiB/s wr, 141 op/s
Nov 29 08:15:13 compute-0 ceph-mon[75237]: osdmap e413: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Nov 29 08:15:13 compute-0 nova_compute[255040]: 2025-11-29 08:15:13.615 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Nov 29 08:15:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Nov 29 08:15:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.9 KiB/s wr, 105 op/s
Nov 29 08:15:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172904242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172904242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:14 compute-0 ceph-mon[75237]: osdmap e414: 3 total, 3 up, 3 in
Nov 29 08:15:14 compute-0 ceph-mon[75237]: osdmap e415: 3 total, 3 up, 3 in
Nov 29 08:15:14 compute-0 ceph-mon[75237]: pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.9 KiB/s wr, 105 op/s
Nov 29 08:15:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3172904242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:14 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3172904242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:14 compute-0 nova_compute[255040]: 2025-11-29 08:15:14.974 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.2 KiB/s wr, 97 op/s
Nov 29 08:15:16 compute-0 ceph-mon[75237]: pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.2 KiB/s wr, 97 op/s
Nov 29 08:15:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.7 KiB/s wr, 64 op/s
Nov 29 08:15:18 compute-0 nova_compute[255040]: 2025-11-29 08:15:18.617 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:18 compute-0 podman[292770]: 2025-11-29 08:15:18.895079929 +0000 UTC m=+0.064138626 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:15:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Nov 29 08:15:19 compute-0 ceph-mon[75237]: pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 2.7 KiB/s wr, 64 op/s
Nov 29 08:15:19 compute-0 nova_compute[255040]: 2025-11-29 08:15:19.975 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 36 KiB/s wr, 62 op/s
Nov 29 08:15:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 08:15:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Nov 29 08:15:21 compute-0 ceph-mon[75237]: pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 36 KiB/s wr, 62 op/s
Nov 29 08:15:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 37 KiB/s wr, 69 op/s
Nov 29 08:15:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Nov 29 08:15:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 08:15:22 compute-0 ceph-mon[75237]: osdmap e416: 3 total, 3 up, 3 in
Nov 29 08:15:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Nov 29 08:15:22 compute-0 sshd-session[292768]: Received disconnect from 45.78.219.195 port 52822:11: Bye Bye [preauth]
Nov 29 08:15:22 compute-0 sshd-session[292768]: Disconnected from authenticating user root 45.78.219.195 port 52822 [preauth]
Nov 29 08:15:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829048507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/829048507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:23 compute-0 ceph-mon[75237]: pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 37 KiB/s wr, 69 op/s
Nov 29 08:15:23 compute-0 ceph-mon[75237]: osdmap e417: 3 total, 3 up, 3 in
Nov 29 08:15:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/829048507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/829048507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:23 compute-0 nova_compute[255040]: 2025-11-29 08:15:23.619 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 37 KiB/s wr, 55 op/s
Nov 29 08:15:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:24 compute-0 ceph-mon[75237]: pgmap v1840: 305 pgs: 305 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 37 KiB/s wr, 55 op/s
Nov 29 08:15:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4186261643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190903292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190903292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1181881186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:24 compute-0 nova_compute[255040]: 2025-11-29 08:15:24.977 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:25 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 36 KiB/s wr, 47 op/s
Nov 29 08:15:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4186261643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4190903292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4190903292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1181881186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943945143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943945143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:27.140 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:27.140 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:27.141 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Nov 29 08:15:27 compute-0 ceph-mon[75237]: pgmap v1841: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 36 KiB/s wr, 47 op/s
Nov 29 08:15:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2943945143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2943945143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 08:15:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Nov 29 08:15:27 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.4 KiB/s wr, 76 op/s
Nov 29 08:15:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Nov 29 08:15:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 08:15:28 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Nov 29 08:15:28 compute-0 ovn_controller[153295]: 2025-11-29T08:15:28Z|00222|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:15:28 compute-0 ceph-mon[75237]: osdmap e418: 3 total, 3 up, 3 in
Nov 29 08:15:28 compute-0 ceph-mon[75237]: pgmap v1843: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.4 KiB/s wr, 76 op/s
Nov 29 08:15:28 compute-0 nova_compute[255040]: 2025-11-29 08:15:28.621 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220379695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220379695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:29 compute-0 ceph-mon[75237]: osdmap e419: 3 total, 3 up, 3 in
Nov 29 08:15:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3220379695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3220379695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:29 compute-0 nova_compute[255040]: 2025-11-29 08:15:29.978 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:29 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.6 KiB/s wr, 114 op/s
Nov 29 08:15:30 compute-0 ceph-mon[75237]: pgmap v1845: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.6 KiB/s wr, 114 op/s
Nov 29 08:15:30 compute-0 nova_compute[255040]: 2025-11-29 08:15:30.895 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:30 compute-0 nova_compute[255040]: 2025-11-29 08:15:30.895 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:30 compute-0 nova_compute[255040]: 2025-11-29 08:15:30.910 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:15:30 compute-0 nova_compute[255040]: 2025-11-29 08:15:30.999 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:30.999 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.009 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.009 255071 INFO nova.compute.claims [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.142 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706092555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.616 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.625 255071 DEBUG nova.compute.provider_tree [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.648 255071 DEBUG nova.scheduler.client.report [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.674 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.675 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.751 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.751 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:15:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1334267552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.772 255071 INFO nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.791 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.933 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.934 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.935 255071 INFO nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Creating image(s)
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.956 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:31 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.7 KiB/s wr, 135 op/s
Nov 29 08:15:31 compute-0 nova_compute[255040]: 2025-11-29 08:15:31.981 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.001 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.005 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.028 255071 DEBUG nova.policy [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7b756f6c364e97a9d0d5298587d61c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6a2673206a04ec28205d820751e3174', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.072 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.073 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "55a6637599f7119d0d1afd670bb8713620840059" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.074 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.074 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "55a6637599f7119d0d1afd670bb8713620840059" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.176 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.180 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3706092555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1334267552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:32 compute-0 podman[292907]: 2025-11-29 08:15:32.957033857 +0000 UTC m=+0.120804488 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:15:32 compute-0 nova_compute[255040]: 2025-11-29 08:15:32.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:33 compute-0 nova_compute[255040]: 2025-11-29 08:15:33.145 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55a6637599f7119d0d1afd670bb8713620840059 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.965s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:33 compute-0 nova_compute[255040]: 2025-11-29 08:15:33.204 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] resizing rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:15:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Nov 29 08:15:33 compute-0 ceph-mon[75237]: pgmap v1846: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 5.7 KiB/s wr, 135 op/s
Nov 29 08:15:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 08:15:33 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Nov 29 08:15:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222201286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222201286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:33 compute-0 nova_compute[255040]: 2025-11-29 08:15:33.603 255071 DEBUG nova.objects.instance [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'migration_context' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:33 compute-0 nova_compute[255040]: 2025-11-29 08:15:33.623 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:33 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 29 08:15:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.216 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.217 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Ensure instance console log exists: /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.218 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.218 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.218 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.234 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Successfully created port: d26e4c07-fd2e-4219-811a-8b7a975e0e27 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:15:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Nov 29 08:15:34 compute-0 ceph-mon[75237]: osdmap e420: 3 total, 3 up, 3 in
Nov 29 08:15:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2222201286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2222201286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:34 compute-0 ceph-mon[75237]: pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 29 08:15:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 08:15:34 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:15:34 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.980 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:34.999 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.000 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.001 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.001 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.291 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Successfully updated port: d26e4c07-fd2e-4219-811a-8b7a975e0e27 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.308 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.308 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquired lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.308 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.388 255071 DEBUG nova.compute.manager [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-changed-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.388 255071 DEBUG nova.compute.manager [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Refreshing instance network info cache due to event network-changed-d26e4c07-fd2e-4219-811a-8b7a975e0e27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.389 255071 DEBUG oslo_concurrency.lockutils [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:35 compute-0 nova_compute[255040]: 2025-11-29 08:15:35.459 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:15:35 compute-0 ceph-mon[75237]: osdmap e421: 3 total, 3 up, 3 in
Nov 29 08:15:35 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 114 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.6 MiB/s wr, 139 op/s
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.116 255071 DEBUG nova.network.neutron [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updating instance_info_cache with network_info: [{"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.138 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Releasing lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.139 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Instance network_info: |[{"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.139 255071 DEBUG oslo_concurrency.lockutils [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.139 255071 DEBUG nova.network.neutron [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Refreshing network info cache for port d26e4c07-fd2e-4219-811a-8b7a975e0e27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.143 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Start _get_guest_xml network_info=[{"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'encrypted': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'image_id': '36a9388d-0d77-4d24-a915-be92247e5dbc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.148 255071 WARNING nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.154 255071 DEBUG nova.virt.libvirt.host [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.155 255071 DEBUG nova.virt.libvirt.host [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.161 255071 DEBUG nova.virt.libvirt.host [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.162 255071 DEBUG nova.virt.libvirt.host [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.162 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.163 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:56:45Z,direct_url=<?>,disk_format='qcow2',id=36a9388d-0d77-4d24-a915-be92247e5dbc,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b480beb2d434be883470bfd9174d524',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:56:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.163 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.164 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.164 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.165 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.165 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.165 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.166 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.166 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.167 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.167 255071 DEBUG nova.virt.hardware [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.171 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Nov 29 08:15:36 compute-0 ceph-mon[75237]: pgmap v1850: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 114 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.6 MiB/s wr, 139 op/s
Nov 29 08:15:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 08:15:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Nov 29 08:15:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30736828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.601 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.784 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.789 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.995 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.996 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.996 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.996 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:15:36 compute-0 nova_compute[255040]: 2025-11-29 08:15:36.997 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398822075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.233 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.236 255071 DEBUG nova.virt.libvirt.vif [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-326543642',display_name='tempest-TestEncryptedCinderVolumes-server-326543642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-326543642',id=24,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJ6IEXau3I2DYOLEkG9RIUH26bBA6kM3bMwX6LL/0O9tLj8zF4tA1UMRWuJS2Vf7WxBxf1SQRqfvTwsmd72QjleNxnnLHtSmC1XedHN3bm1oH9bTJmHDQHMF1nbKKRMHQ==',key_name='tempest-keypair-1501577220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-3zy1jq8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=6568f6b1-4266-4fcc-b566-ae29baaa5c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.237 255071 DEBUG nova.network.os_vif_util [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.239 255071 DEBUG nova.network.os_vif_util [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.240 255071 DEBUG nova.objects.instance [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.259 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <uuid>6568f6b1-4266-4fcc-b566-ae29baaa5c0f</uuid>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <name>instance-00000018</name>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-326543642</nova:name>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:15:36</nova:creationTime>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:user uuid="8a7b756f6c364e97a9d0d5298587d61c">tempest-TestEncryptedCinderVolumes-2116890995-project-member</nova:user>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:project uuid="e6a2673206a04ec28205d820751e3174">tempest-TestEncryptedCinderVolumes-2116890995</nova:project>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:root type="image" uuid="36a9388d-0d77-4d24-a915-be92247e5dbc"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <nova:port uuid="d26e4c07-fd2e-4219-811a-8b7a975e0e27">
Nov 29 08:15:37 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <system>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="serial">6568f6b1-4266-4fcc-b566-ae29baaa5c0f</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="uuid">6568f6b1-4266-4fcc-b566-ae29baaa5c0f</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </system>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <os>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </os>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <features>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </features>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk">
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </source>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config">
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </source>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:15:37 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:14:3b:b4"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <target dev="tapd26e4c07-fd"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/console.log" append="off"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <video>
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </video>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:15:37 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:15:37 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:15:37 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:15:37 compute-0 nova_compute[255040]: </domain>
Nov 29 08:15:37 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.268 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Preparing to wait for external event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.268 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.269 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.269 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.270 255071 DEBUG nova.virt.libvirt.vif [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-326543642',display_name='tempest-TestEncryptedCinderVolumes-server-326543642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-326543642',id=24,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJ6IEXau3I2DYOLEkG9RIUH26bBA6kM3bMwX6LL/0O9tLj8zF4tA1UMRWuJS2Vf7WxBxf1SQRqfvTwsmd72QjleNxnnLHtSmC1XedHN3bm1oH9bTJmHDQHMF1nbKKRMHQ==',key_name='tempest-keypair-1501577220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-3zy1jq8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=6568f6b1-4266-4fcc-b566-ae29baaa5c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.271 255071 DEBUG nova.network.os_vif_util [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.272 255071 DEBUG nova.network.os_vif_util [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.272 255071 DEBUG os_vif [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.273 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.274 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.274 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.278 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.278 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd26e4c07-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.279 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd26e4c07-fd, col_values=(('external_ids', {'iface-id': 'd26e4c07-fd2e-4219-811a-8b7a975e0e27', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:3b:b4', 'vm-uuid': '6568f6b1-4266-4fcc-b566-ae29baaa5c0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:37 compute-0 NetworkManager[49116]: <info>  [1764404137.2820] manager: (tapd26e4c07-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.283 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.289 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/354615202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.290 255071 INFO os_vif [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd')
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/354615202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.304 255071 DEBUG nova.network.neutron [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updated VIF entry in instance network info cache for port d26e4c07-fd2e-4219-811a-8b7a975e0e27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.305 255071 DEBUG nova.network.neutron [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updating instance_info_cache with network_info: [{"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.323 255071 DEBUG oslo_concurrency.lockutils [req-7676cb97-e07f-4653-9741-1755e401647d req-f6158bac-b11f-4342-b95d-d9c8656c6c4e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.354 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.355 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.355 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No VIF found with MAC fa:16:3e:14:3b:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.356 255071 INFO nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Using config drive
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.376 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1543108003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.475 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:37 compute-0 ceph-mon[75237]: osdmap e422: 3 total, 3 up, 3 in
Nov 29 08:15:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/30736828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2398822075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/354615202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/354615202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1543108003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.526 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.526 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.652 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.653 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4354MB free_disk=59.97626495361328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.653 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.654 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.661 255071 INFO nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Creating config drive at /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.667 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xl4ui0c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.729 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.729 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.730 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.765 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.796 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xl4ui0c" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.819 255071 DEBUG nova.storage.rbd_utils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:15:37 compute-0 nova_compute[255040]: 2025-11-29 08:15:37.822 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294081122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294081122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:37 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 29 08:15:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453522830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.194 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.200 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.236 255071 DEBUG oslo_concurrency.processutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config 6568f6b1-4266-4fcc-b566-ae29baaa5c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.237 255071 INFO nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Deleting local config drive /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f/disk.config because it was imported into RBD.
Nov 29 08:15:38 compute-0 kernel: tapd26e4c07-fd: entered promiscuous mode
Nov 29 08:15:38 compute-0 NetworkManager[49116]: <info>  [1764404138.2936] manager: (tapd26e4c07-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.334 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:38 compute-0 ovn_controller[153295]: 2025-11-29T08:15:38Z|00223|binding|INFO|Claiming lport d26e4c07-fd2e-4219-811a-8b7a975e0e27 for this chassis.
Nov 29 08:15:38 compute-0 ovn_controller[153295]: 2025-11-29T08:15:38Z|00224|binding|INFO|d26e4c07-fd2e-4219-811a-8b7a975e0e27: Claiming fa:16:3e:14:3b:b4 10.100.0.8
Nov 29 08:15:38 compute-0 systemd-udevd[293184]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.341 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:38 compute-0 NetworkManager[49116]: <info>  [1764404138.3506] device (tapd26e4c07-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:15:38 compute-0 NetworkManager[49116]: <info>  [1764404138.3518] device (tapd26e4c07-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:15:38 compute-0 systemd-machined[216271]: New machine qemu-24-instance-00000018.
Nov 29 08:15:38 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Nov 29 08:15:38 compute-0 ovn_controller[153295]: 2025-11-29T08:15:38Z|00225|binding|INFO|Setting lport d26e4c07-fd2e-4219-811a-8b7a975e0e27 ovn-installed in OVS
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.420 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4294081122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4294081122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:38 compute-0 ceph-mon[75237]: pgmap v1852: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 29 08:15:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2453522830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:38 compute-0 ovn_controller[153295]: 2025-11-29T08:15:38Z|00226|binding|INFO|Setting lport d26e4c07-fd2e-4219-811a-8b7a975e0e27 up in Southbound
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.778 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:3b:b4 10.100.0.8'], port_security=['fa:16:3e:14:3b:b4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6568f6b1-4266-4fcc-b566-ae29baaa5c0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63f1d851-e4a5-47d7-a466-30b92fb8edc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=d26e4c07-fd2e-4219-811a-8b7a975e0e27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.779 163500 INFO neutron.agent.ovn.metadata.agent [-] Port d26e4c07-fd2e-4219-811a-8b7a975e0e27 in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a bound to our chassis
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.780 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.780 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.792 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6664fdbd-bb9d-4498-a56f-54342085d04c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.793 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7844e875-d1 in ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.795 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7844e875-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.795 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[34c0df25-406a-4b80-b547-d5dd7120c986]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.796 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[56304d25-25ad-42e7-9aca-ae7b7f4e63b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.803 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.804 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.808 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[5cca006f-e3ed-4803-ba89-052cde39fc8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.830 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2c38ab-b762-41e6-b4b2-773beb97e44f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.864 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404138.8633766, 6568f6b1-4266-4fcc-b566-ae29baaa5c0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.864 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] VM Started (Lifecycle Event)
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.869 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[36651310-c623-4557-8e2b-eaef28ea4d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.874 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[93aaf1a5-707d-4d05-a66f-ffdbe0497092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 systemd-udevd[293188]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:15:38 compute-0 NetworkManager[49116]: <info>  [1764404138.8764] manager: (tap7844e875-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.891 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.894 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404138.8636165, 6568f6b1-4266-4fcc-b566-ae29baaa5c0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:38 compute-0 nova_compute[255040]: 2025-11-29 08:15:38.894 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] VM Paused (Lifecycle Event)
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:15:38
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.log', 'images']
Nov 29 08:15:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.910 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[7134f47b-0584-4c76-acc8-373887abc741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.913 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6e680301-0b23-4971-9396-fa96c466e1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 NetworkManager[49116]: <info>  [1764404138.9364] device (tap7844e875-d0): carrier: link connected
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.940 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[47bbeab9-a401-40d2-9b2b-fdfedb582040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.956 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe7c132-f0c4-4c14-8594-5b4b0c82f831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637627, 'reachable_time': 25657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293262, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.971 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7a43ef57-9bf1-4412-9cae-c3dc999983db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7298'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637627, 'tstamp': 637627}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293263, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:38 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:38.988 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b7474856-55c2-4416-aee6-2c022d4d9af7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637627, 'reachable_time': 25657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293264, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.026 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1a153b-90fe-4a0a-b9c8-526493da7da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.109 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[89ba4505-e0ed-46ed-85a0-2b91adbd89b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.110 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.110 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.111 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7844e875-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:39 compute-0 NetworkManager[49116]: <info>  [1764404139.1137] manager: (tap7844e875-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.113 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:39 compute-0 kernel: tap7844e875-d0: entered promiscuous mode
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.117 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7844e875-d0, col_values=(('external_ids', {'iface-id': 'b495613a-3fb1-48c4-aa81-640b29e83d9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:39 compute-0 ovn_controller[153295]: 2025-11-29T08:15:39Z|00227|binding|INFO|Releasing lport b495613a-3fb1-48c4-aa81-640b29e83d9b from this chassis (sb_readonly=0)
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.118 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.122 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.123 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6aa24e-07cc-44bb-b1a0-95ad94b6b7d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.124 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:15:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:39.126 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'env', 'PROCESS_TAG=haproxy-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7844e875-d723-468d-8c4a-c3bb5b3b635a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.131 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.135 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.141 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.169 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.303 255071 DEBUG nova.compute.manager [req-d26a27ca-b33b-4294-86d1-cf9c1e18af5c req-5e4201ff-09fc-4790-86ca-dc2fcb05325c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.304 255071 DEBUG oslo_concurrency.lockutils [req-d26a27ca-b33b-4294-86d1-cf9c1e18af5c req-5e4201ff-09fc-4790-86ca-dc2fcb05325c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.304 255071 DEBUG oslo_concurrency.lockutils [req-d26a27ca-b33b-4294-86d1-cf9c1e18af5c req-5e4201ff-09fc-4790-86ca-dc2fcb05325c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.305 255071 DEBUG oslo_concurrency.lockutils [req-d26a27ca-b33b-4294-86d1-cf9c1e18af5c req-5e4201ff-09fc-4790-86ca-dc2fcb05325c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.305 255071 DEBUG nova.compute.manager [req-d26a27ca-b33b-4294-86d1-cf9c1e18af5c req-5e4201ff-09fc-4790-86ca-dc2fcb05325c cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Processing event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.307 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.314 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404139.3143659, 6568f6b1-4266-4fcc-b566-ae29baaa5c0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.315 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] VM Resumed (Lifecycle Event)
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.316 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.321 255071 INFO nova.virt.libvirt.driver [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Instance spawned successfully.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.321 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.346 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.347 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.347 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.347 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.348 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.348 255071 DEBUG nova.virt.libvirt.driver [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.352 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.355 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.388 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.408 255071 INFO nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Took 7.47 seconds to spawn the instance on the hypervisor.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.409 255071 DEBUG nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.479 255071 INFO nova.compute.manager [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Took 8.52 seconds to build instance.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.495 255071 DEBUG oslo_concurrency.lockutils [None req-58fc12f6-a2b6-4d43-be1b-d19367531638 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:39 compute-0 podman[293296]: 2025-11-29 08:15:39.51227551 +0000 UTC m=+0.022651193 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:15:39 compute-0 podman[293296]: 2025-11-29 08:15:39.671422044 +0000 UTC m=+0.181797697 container create fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:15:39 compute-0 systemd[1]: Started libpod-conmon-fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042.scope.
Nov 29 08:15:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d596e9b52b6dab291ca97019feb5a2eedd74002d231461bb6daa8ef3877db4b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:39 compute-0 podman[293296]: 2025-11-29 08:15:39.807608677 +0000 UTC m=+0.317984360 container init fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.808 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.809 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:39 compute-0 podman[293296]: 2025-11-29 08:15:39.814215696 +0000 UTC m=+0.324591349 container start fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:15:39 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [NOTICE]   (293316) : New worker (293318) forked
Nov 29 08:15:39 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [NOTICE]   (293316) : Loading success.
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:39 compute-0 nova_compute[255040]: 2025-11-29 08:15:39.982 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:39 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 3.2 MiB/s wr, 132 op/s
Nov 29 08:15:40 compute-0 podman[293327]: 2025-11-29 08:15:40.893978798 +0000 UTC m=+0.060135417 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:15:41 compute-0 ceph-mon[75237]: pgmap v1853: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 3.2 MiB/s wr, 132 op/s
Nov 29 08:15:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:15:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/400603341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:41 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Nov 29 08:15:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/400603341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.282 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.674 255071 DEBUG nova.compute.manager [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.674 255071 DEBUG oslo_concurrency.lockutils [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.675 255071 DEBUG oslo_concurrency.lockutils [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.675 255071 DEBUG oslo_concurrency.lockutils [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.675 255071 DEBUG nova.compute.manager [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] No waiting events found dispatching network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:15:42 compute-0 nova_compute[255040]: 2025-11-29 08:15:42.675 255071 WARNING nova.compute.manager [req-bf175d95-3d21-4997-8a60-b68c74d01c89 req-80e8f278-8033-48c4-a4ae-025e59841b9f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received unexpected event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 for instance with vm_state active and task_state None.
Nov 29 08:15:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Nov 29 08:15:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 08:15:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Nov 29 08:15:43 compute-0 ceph-mon[75237]: pgmap v1854: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:15:43 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 152 op/s
Nov 29 08:15:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Nov 29 08:15:44 compute-0 ceph-mon[75237]: osdmap e423: 3 total, 3 up, 3 in
Nov 29 08:15:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 08:15:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Nov 29 08:15:44 compute-0 nova_compute[255040]: 2025-11-29 08:15:44.984 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 ceph-mon[75237]: pgmap v1856: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 152 op/s
Nov 29 08:15:45 compute-0 ceph-mon[75237]: osdmap e424: 3 total, 3 up, 3 in
Nov 29 08:15:45 compute-0 NetworkManager[49116]: <info>  [1764404145.3766] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 NetworkManager[49116]: <info>  [1764404145.3777] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.460 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 ovn_controller[153295]: 2025-11-29T08:15:45Z|00228|binding|INFO|Releasing lport b495613a-3fb1-48c4-aa81-640b29e83d9b from this chassis (sb_readonly=0)
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.469 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.554 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:45.554 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:15:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:45.555 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.663 255071 DEBUG nova.compute.manager [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-changed-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.663 255071 DEBUG nova.compute.manager [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Refreshing instance network info cache due to event network-changed-d26e4c07-fd2e-4219-811a-8b7a975e0e27. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.663 255071 DEBUG oslo_concurrency.lockutils [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.664 255071 DEBUG oslo_concurrency.lockutils [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:15:45 compute-0 nova_compute[255040]: 2025-11-29 08:15:45.664 255071 DEBUG nova.network.neutron [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Refreshing network info cache for port d26e4c07-fd2e-4219-811a-8b7a975e0e27 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:15:45 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 173 op/s
Nov 29 08:15:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Nov 29 08:15:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 08:15:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Nov 29 08:15:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508522099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508522099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:47 compute-0 nova_compute[255040]: 2025-11-29 08:15:47.103 255071 DEBUG nova.network.neutron [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updated VIF entry in instance network info cache for port d26e4c07-fd2e-4219-811a-8b7a975e0e27. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:15:47 compute-0 nova_compute[255040]: 2025-11-29 08:15:47.103 255071 DEBUG nova.network.neutron [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updating instance_info_cache with network_info: [{"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:15:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Nov 29 08:15:47 compute-0 ceph-mon[75237]: pgmap v1858: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 173 op/s
Nov 29 08:15:47 compute-0 ceph-mon[75237]: osdmap e425: 3 total, 3 up, 3 in
Nov 29 08:15:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/508522099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/508522099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 08:15:47 compute-0 nova_compute[255040]: 2025-11-29 08:15:47.119 255071 DEBUG oslo_concurrency.lockutils [req-27d115cd-8b37-4288-848d-74e995dda00e req-8190353b-e8cc-49db-93ad-22b5a709f3e6 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-6568f6b1-4266-4fcc-b566-ae29baaa5c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:15:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Nov 29 08:15:47 compute-0 nova_compute[255040]: 2025-11-29 08:15:47.306 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:15:47.557 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:15:47 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 KiB/s wr, 109 op/s
Nov 29 08:15:48 compute-0 ceph-mon[75237]: osdmap e426: 3 total, 3 up, 3 in
Nov 29 08:15:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Nov 29 08:15:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 08:15:48 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Nov 29 08:15:48 compute-0 nova_compute[255040]: 2025-11-29 08:15:48.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Nov 29 08:15:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 08:15:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Nov 29 08:15:49 compute-0 ceph-mon[75237]: pgmap v1861: 305 pgs: 305 active+clean; 134 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 KiB/s wr, 109 op/s
Nov 29 08:15:49 compute-0 ceph-mon[75237]: osdmap e427: 3 total, 3 up, 3 in
Nov 29 08:15:49 compute-0 ceph-mon[75237]: osdmap e428: 3 total, 3 up, 3 in
Nov 29 08:15:49 compute-0 podman[293347]: 2025-11-29 08:15:49.905463708 +0000 UTC m=+0.063399635 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:15:49 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 134 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 69 op/s
Nov 29 08:15:49 compute-0 nova_compute[255040]: 2025-11-29 08:15:49.988 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Nov 29 08:15:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 08:15:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Nov 29 08:15:50 compute-0 ceph-mon[75237]: pgmap v1864: 305 pgs: 305 active+clean; 134 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 69 op/s
Nov 29 08:15:50 compute-0 ceph-mon[75237]: osdmap e429: 3 total, 3 up, 3 in
Nov 29 08:15:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Nov 29 08:15:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 08:15:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Nov 29 08:15:51 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 143 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 354 op/s
Nov 29 08:15:52 compute-0 nova_compute[255040]: 2025-11-29 08:15:52.310 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Nov 29 08:15:52 compute-0 ceph-mon[75237]: osdmap e430: 3 total, 3 up, 3 in
Nov 29 08:15:52 compute-0 ceph-mon[75237]: pgmap v1867: 305 pgs: 305 active+clean; 143 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 354 op/s
Nov 29 08:15:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 08:15:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Nov 29 08:15:53 compute-0 ovn_controller[153295]: 2025-11-29T08:15:53Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:3b:b4 10.100.0.8
Nov 29 08:15:53 compute-0 ovn_controller[153295]: 2025-11-29T08:15:53Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:3b:b4 10.100.0.8
Nov 29 08:15:53 compute-0 ceph-mon[75237]: osdmap e431: 3 total, 3 up, 3 in
Nov 29 08:15:53 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 143 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 247 KiB/s rd, 1.8 MiB/s wr, 289 op/s
Nov 29 08:15:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Nov 29 08:15:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 08:15:54 compute-0 nova_compute[255040]: 2025-11-29 08:15:54.991 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Nov 29 08:15:55 compute-0 ceph-mon[75237]: pgmap v1869: 305 pgs: 305 active+clean; 143 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 247 KiB/s rd, 1.8 MiB/s wr, 289 op/s
Nov 29 08:15:55 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 160 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 297 op/s
Nov 29 08:15:56 compute-0 ceph-mon[75237]: osdmap e432: 3 total, 3 up, 3 in
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007476359518460636 of space, bias 1.0, pg target 0.2242907855538191 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003517837956632155 of space, bias 1.0, pg target 0.10553513869896464 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:15:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583695917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2583695917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:57 compute-0 ceph-mon[75237]: pgmap v1871: 305 pgs: 305 active+clean; 160 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 297 op/s
Nov 29 08:15:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2583695917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2583695917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:57 compute-0 nova_compute[255040]: 2025-11-29 08:15:57.315 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869248220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:57 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:57 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869248220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:57 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 162 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 678 KiB/s rd, 2.5 MiB/s wr, 217 op/s
Nov 29 08:15:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3869248220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:58 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3869248220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745695085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3745695085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:59 compute-0 ceph-mon[75237]: pgmap v1872: 305 pgs: 305 active+clean; 162 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 678 KiB/s rd, 2.5 MiB/s wr, 217 op/s
Nov 29 08:15:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3745695085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3745695085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Nov 29 08:15:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 08:15:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Nov 29 08:15:59 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 165 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 685 KiB/s rd, 2.2 MiB/s wr, 222 op/s
Nov 29 08:15:59 compute-0 nova_compute[255040]: 2025-11-29 08:15:59.994 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Nov 29 08:16:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 08:16:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Nov 29 08:16:00 compute-0 ceph-mon[75237]: osdmap e433: 3 total, 3 up, 3 in
Nov 29 08:16:00 compute-0 ceph-mon[75237]: pgmap v1874: 305 pgs: 305 active+clean; 165 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 685 KiB/s rd, 2.2 MiB/s wr, 222 op/s
Nov 29 08:16:01 compute-0 ceph-mon[75237]: osdmap e434: 3 total, 3 up, 3 in
Nov 29 08:16:01 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 1.4 MiB/s wr, 242 op/s
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.052 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.053 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.070 255071 DEBUG nova.objects.instance [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'flavor' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.111 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.301 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.302 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.302 255071 INFO nova.compute.manager [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Attaching volume b7f4af08-46f1-44a9-848a-48448dc371c9 to /dev/vdb
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.319 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1419644169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1419644169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.479 255071 DEBUG os_brick.utils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.482 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.494 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.494 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[31c1a511-637e-4355-8ae5-e7ef0d17b521]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.496 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.505 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.505 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[e117dd2c-ad04-4f8e-8d2c-63ba3ce54701]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.506 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.516 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.517 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7f8ba5-6bac-4801-bc6e-367375b9b67d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.518 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[74318cd4-8806-4868-8322-e7d84d6233c3]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.518 255071 DEBUG oslo_concurrency.processutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.542 255071 DEBUG oslo_concurrency.processutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.547 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.547 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.548 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.548 255071 DEBUG os_brick.utils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:16:02 compute-0 nova_compute[255040]: 2025-11-29 08:16:02.549 255071 DEBUG nova.virt.block_device [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updating existing volume attachment record: a2a376df-430b-4b9d-93df-9614d4cb5723 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:16:02 compute-0 ceph-mon[75237]: pgmap v1876: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 606 KiB/s rd, 1.4 MiB/s wr, 242 op/s
Nov 29 08:16:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1419644169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1419644169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:16:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1261598158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.719 255071 DEBUG os_brick.encryptors [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Using volume encryption metadata '{'encryption_key_id': '890bb1f4-8585-47ed-b970-4c43c3317ad0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b7f4af08-46f1-44a9-848a-48448dc371c9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b7f4af08-46f1-44a9-848a-48448dc371c9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6568f6b1-4266-4fcc-b566-ae29baaa5c0f', 'attached_at': '', 'detached_at': '', 'volume_id': 'b7f4af08-46f1-44a9-848a-48448dc371c9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.727 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.742 255071 DEBUG barbicanclient.v1.secrets [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.743 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.766 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.767 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.791 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.792 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.825 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.826 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.850 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.851 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.889 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.891 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.923 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.924 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 sudo[293378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.945 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.945 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 sudo[293378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:03 compute-0 sudo[293378]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.968 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.970 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.991 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:03 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 125 KiB/s wr, 179 op/s
Nov 29 08:16:03 compute-0 nova_compute[255040]: 2025-11-29 08:16:03.992 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.019 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.020 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1261598158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:16:04 compute-0 podman[293377]: 2025-11-29 08:16:04.042708902 +0000 UTC m=+0.177046840 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.044 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.045 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 sudo[293419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:04 compute-0 sudo[293419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:04 compute-0 sudo[293419]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.075 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.075 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.104 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.104 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 sudo[293452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.130 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 sudo[293452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.130 255071 INFO barbicanclient.base [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/890bb1f4-8585-47ed-b970-4c43c3317ad0
Nov 29 08:16:04 compute-0 sudo[293452]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.149 255071 DEBUG barbicanclient.client [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.150 255071 DEBUG nova.virt.libvirt.host [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:16:04 compute-0 nova_compute[255040]:     <volume>b7f4af08-46f1-44a9-848a-48448dc371c9</volume>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:16:04 compute-0 nova_compute[255040]: </secret>
Nov 29 08:16:04 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:16:04 compute-0 sudo[293477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:16:04 compute-0 sudo[293477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.480 255071 DEBUG nova.objects.instance [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'flavor' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.501 255071 DEBUG nova.virt.libvirt.driver [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Attempting to attach volume b7f4af08-46f1-44a9-848a-48448dc371c9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.505 255071 DEBUG nova.virt.libvirt.guest [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-b7f4af08-46f1-44a9-848a-48448dc371c9">
Nov 29 08:16:04 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   </source>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <auth username="openstack">
Nov 29 08:16:04 compute-0 nova_compute[255040]:     <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   </auth>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <serial>b7f4af08-46f1-44a9-848a-48448dc371c9</serial>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:16:04 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="4df17d5f-a88c-4bfb-94c4-b9537fd77407"/>
Nov 29 08:16:04 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:16:04 compute-0 nova_compute[255040]: </disk>
Nov 29 08:16:04 compute-0 nova_compute[255040]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:16:04 compute-0 sudo[293477]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:16:04 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:16:04 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:16:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:16:04 compute-0 nova_compute[255040]: 2025-11-29 08:16:04.996 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:05 compute-0 ceph-mon[75237]: pgmap v1877: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 125 KiB/s wr, 179 op/s
Nov 29 08:16:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:05 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:16:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ef0f3580-5280-4e9d-b573-ffc49178b41f does not exist
Nov 29 08:16:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 13f43f8e-06d6-4738-9fcf-a41bb8a81de3 does not exist
Nov 29 08:16:05 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 05a60a65-27d1-43da-95ac-2f2b634e9e67 does not exist
Nov 29 08:16:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:16:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:16:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:16:05 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:16:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:16:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:05 compute-0 sudo[293553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:05 compute-0 sudo[293553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:05 compute-0 sudo[293553]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:05 compute-0 sudo[293578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:05 compute-0 sudo[293578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:05 compute-0 sudo[293578]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:05 compute-0 sudo[293603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:05 compute-0 sudo[293603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:05 compute-0 sudo[293603]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:05 compute-0 sudo[293628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:16:05 compute-0 sudo[293628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:05 compute-0 podman[293694]: 2025-11-29 08:16:05.895305754 +0000 UTC m=+0.076771757 container create 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:16:05 compute-0 podman[293694]: 2025-11-29 08:16:05.84191327 +0000 UTC m=+0.023379303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:05 compute-0 systemd[1]: Started libpod-conmon-83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5.scope.
Nov 29 08:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:05 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 67 KiB/s wr, 107 op/s
Nov 29 08:16:06 compute-0 podman[293694]: 2025-11-29 08:16:06.149812607 +0000 UTC m=+0.331278660 container init 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:06 compute-0 podman[293694]: 2025-11-29 08:16:06.15771243 +0000 UTC m=+0.339178443 container start 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:16:06 compute-0 elastic_banzai[293711]: 167 167
Nov 29 08:16:06 compute-0 systemd[1]: libpod-83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5.scope: Deactivated successfully.
Nov 29 08:16:06 compute-0 podman[293694]: 2025-11-29 08:16:06.231730372 +0000 UTC m=+0.413196375 container attach 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:16:06 compute-0 podman[293694]: 2025-11-29 08:16:06.232115413 +0000 UTC m=+0.413581416 container died 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:16:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Nov 29 08:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28de760a47eff2705c584dacc67d133a49c485ff08a9fb20f95bb77b41d7b93-merged.mount: Deactivated successfully.
Nov 29 08:16:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 08:16:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:16:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:16:06 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:06 compute-0 ceph-mon[75237]: pgmap v1878: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 67 KiB/s wr, 107 op/s
Nov 29 08:16:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Nov 29 08:16:06 compute-0 podman[293694]: 2025-11-29 08:16:06.891037023 +0000 UTC m=+1.072503026 container remove 83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:16:06 compute-0 systemd[1]: libpod-conmon-83235221fc585ee4beeee7c4c3ff56feaaba64e8faedbd65bba67cb17fcd0af5.scope: Deactivated successfully.
Nov 29 08:16:07 compute-0 podman[293736]: 2025-11-29 08:16:07.068176524 +0000 UTC m=+0.027028793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:07 compute-0 podman[293736]: 2025-11-29 08:16:07.279474489 +0000 UTC m=+0.238326778 container create 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.321 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:07 compute-0 systemd[1]: Started libpod-conmon-306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3.scope.
Nov 29 08:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:07 compute-0 podman[293736]: 2025-11-29 08:16:07.449932378 +0000 UTC m=+0.408784627 container init 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:16:07 compute-0 podman[293736]: 2025-11-29 08:16:07.458158501 +0000 UTC m=+0.417010750 container start 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:16:07 compute-0 podman[293736]: 2025-11-29 08:16:07.632145626 +0000 UTC m=+0.590997915 container attach 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:16:07 compute-0 ceph-mon[75237]: osdmap e435: 3 total, 3 up, 3 in
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.736 255071 DEBUG nova.virt.libvirt.driver [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.737 255071 DEBUG nova.virt.libvirt.driver [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.737 255071 DEBUG nova.virt.libvirt.driver [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.737 255071 DEBUG nova.virt.libvirt.driver [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No VIF found with MAC fa:16:3e:14:3b:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:16:07 compute-0 nova_compute[255040]: 2025-11-29 08:16:07.950 255071 DEBUG oslo_concurrency.lockutils [None req-f69b78e9-cd78-4144-8bac-e4a1cb5fe466 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:07 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 61 KiB/s wr, 116 op/s
Nov 29 08:16:08 compute-0 trusting_galois[293753]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:16:08 compute-0 trusting_galois[293753]: --> relative data size: 1.0
Nov 29 08:16:08 compute-0 trusting_galois[293753]: --> All data devices are unavailable
Nov 29 08:16:08 compute-0 systemd[1]: libpod-306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3.scope: Deactivated successfully.
Nov 29 08:16:08 compute-0 podman[293736]: 2025-11-29 08:16:08.587832602 +0000 UTC m=+1.546684901 container died 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:16:08 compute-0 systemd[1]: libpod-306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3.scope: Consumed 1.072s CPU time.
Nov 29 08:16:08 compute-0 ceph-mon[75237]: pgmap v1880: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 61 KiB/s wr, 116 op/s
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.720 255071 DEBUG oslo_concurrency.lockutils [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.720 255071 DEBUG oslo_concurrency.lockutils [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.733 255071 INFO nova.compute.manager [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Detaching volume b7f4af08-46f1-44a9-848a-48448dc371c9
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.834 255071 INFO nova.virt.block_device [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Attempting to driver detach volume b7f4af08-46f1-44a9-848a-48448dc371c9 from mountpoint /dev/vdb
Nov 29 08:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-24b2b6bedc51a949fb086961e373d1383190fff0c5077841d2d9b28d746a6076-merged.mount: Deactivated successfully.
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.957 255071 DEBUG os_brick.encryptors [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Using volume encryption metadata '{'encryption_key_id': '890bb1f4-8585-47ed-b970-4c43c3317ad0', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b7f4af08-46f1-44a9-848a-48448dc371c9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b7f4af08-46f1-44a9-848a-48448dc371c9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6568f6b1-4266-4fcc-b566-ae29baaa5c0f', 'attached_at': '', 'detached_at': '', 'volume_id': 'b7f4af08-46f1-44a9-848a-48448dc371c9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.965 255071 DEBUG nova.virt.libvirt.driver [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Attempting to detach device vdb from instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:16:08 compute-0 nova_compute[255040]: 2025-11-29 08:16:08.966 255071 DEBUG nova.virt.libvirt.guest [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-b7f4af08-46f1-44a9-848a-48448dc371c9">
Nov 29 08:16:08 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   </source>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <serial>b7f4af08-46f1-44a9-848a-48448dc371c9</serial>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:16:08 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="4df17d5f-a88c-4bfb-94c4-b9537fd77407"/>
Nov 29 08:16:08 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:16:08 compute-0 nova_compute[255040]: </disk>
Nov 29 08:16:08 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:16:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931969632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:08 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931969632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Nov 29 08:16:09 compute-0 nova_compute[255040]: 2025-11-29 08:16:09.777 255071 INFO nova.virt.libvirt.driver [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully detached device vdb from instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f from the persistent domain config.
Nov 29 08:16:09 compute-0 nova_compute[255040]: 2025-11-29 08:16:09.778 255071 DEBUG nova.virt.libvirt.driver [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:16:09 compute-0 nova_compute[255040]: 2025-11-29 08:16:09.778 255071 DEBUG nova.virt.libvirt.guest [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <source protocol="rbd" name="volumes/volume-b7f4af08-46f1-44a9-848a-48448dc371c9">
Nov 29 08:16:09 compute-0 nova_compute[255040]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   </source>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <serial>b7f4af08-46f1-44a9-848a-48448dc371c9</serial>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   <encryption format="luks">
Nov 29 08:16:09 compute-0 nova_compute[255040]:     <secret type="passphrase" uuid="4df17d5f-a88c-4bfb-94c4-b9537fd77407"/>
Nov 29 08:16:09 compute-0 nova_compute[255040]:   </encryption>
Nov 29 08:16:09 compute-0 nova_compute[255040]: </disk>
Nov 29 08:16:09 compute-0 nova_compute[255040]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:16:09 compute-0 podman[293736]: 2025-11-29 08:16:09.897171883 +0000 UTC m=+2.856024152 container remove 306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 08:16:09 compute-0 systemd[1]: libpod-conmon-306e13493dd5c46dbc275cd8a78be01c0de5cef7210b163866cbbb6241c889a3.scope: Deactivated successfully.
Nov 29 08:16:09 compute-0 sudo[293628]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:09 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 19 KiB/s wr, 103 op/s
Nov 29 08:16:09 compute-0 nova_compute[255040]: 2025-11-29 08:16:09.998 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:10 compute-0 sudo[293796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:10 compute-0 sudo[293796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:10 compute-0 sudo[293796]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:10 compute-0 sudo[293821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:10 compute-0 sudo[293821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:10 compute-0 sudo[293821]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:10 compute-0 sudo[293846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:10 compute-0 sudo[293846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:10 compute-0 sudo[293846]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:10 compute-0 sudo[293871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:16:10 compute-0 sudo[293871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 08:16:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Nov 29 08:16:10 compute-0 nova_compute[255040]: 2025-11-29 08:16:10.298 255071 DEBUG nova.virt.libvirt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Received event <DeviceRemovedEvent: 1764404170.2981663, 6568f6b1-4266-4fcc-b566-ae29baaa5c0f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:16:10 compute-0 nova_compute[255040]: 2025-11-29 08:16:10.300 255071 DEBUG nova.virt.libvirt.driver [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:16:10 compute-0 nova_compute[255040]: 2025-11-29 08:16:10.302 255071 INFO nova.virt.libvirt.driver [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully detached device vdb from instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f from the live domain config.
Nov 29 08:16:10 compute-0 nova_compute[255040]: 2025-11-29 08:16:10.471 255071 DEBUG nova.objects.instance [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'flavor' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:10 compute-0 nova_compute[255040]: 2025-11-29 08:16:10.519 255071 DEBUG oslo_concurrency.lockutils [None req-d394f690-e877-4b92-9d8f-b982844a2a0e 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:10 compute-0 podman[293939]: 2025-11-29 08:16:10.670934989 +0000 UTC m=+0.045311177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3931969632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3931969632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.270 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.271 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.271 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.271 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.272 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.273 255071 INFO nova.compute.manager [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Terminating instance
Nov 29 08:16:11 compute-0 nova_compute[255040]: 2025-11-29 08:16:11.274 255071 DEBUG nova.compute.manager [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:16:11 compute-0 podman[293939]: 2025-11-29 08:16:11.539135439 +0000 UTC m=+0.913511587 container create 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:11 compute-0 systemd[1]: Started libpod-conmon-0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93.scope.
Nov 29 08:16:11 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 22 KiB/s wr, 86 op/s
Nov 29 08:16:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:12 compute-0 nova_compute[255040]: 2025-11-29 08:16:12.324 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:12 compute-0 podman[293939]: 2025-11-29 08:16:12.405697574 +0000 UTC m=+1.780073702 container init 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:16:12 compute-0 ceph-mon[75237]: pgmap v1881: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 19 KiB/s wr, 103 op/s
Nov 29 08:16:12 compute-0 ceph-mon[75237]: osdmap e436: 3 total, 3 up, 3 in
Nov 29 08:16:12 compute-0 podman[293939]: 2025-11-29 08:16:12.416234479 +0000 UTC m=+1.790610607 container start 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:16:12 compute-0 podman[293939]: 2025-11-29 08:16:12.423279349 +0000 UTC m=+1.797655477 container attach 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:16:12 compute-0 bold_franklin[293966]: 167 167
Nov 29 08:16:12 compute-0 systemd[1]: libpod-0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93.scope: Deactivated successfully.
Nov 29 08:16:12 compute-0 podman[293939]: 2025-11-29 08:16:12.427286998 +0000 UTC m=+1.801663106 container died 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c724d7f1783e8f482119a46d4e09a4023b973aa695d58aa97d5fcdfafb47d279-merged.mount: Deactivated successfully.
Nov 29 08:16:13 compute-0 ceph-mon[75237]: pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 22 KiB/s wr, 86 op/s
Nov 29 08:16:13 compute-0 podman[293939]: 2025-11-29 08:16:13.694508729 +0000 UTC m=+3.068884837 container remove 0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:16:13 compute-0 kernel: tapd26e4c07-fd (unregistering): left promiscuous mode
Nov 29 08:16:13 compute-0 NetworkManager[49116]: <info>  [1764404173.7054] device (tapd26e4c07-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:16:13 compute-0 systemd[1]: libpod-conmon-0f03e96b9454d4e94374168b3cd0fc4c161bef3e953970ee8169c81512655d93.scope: Deactivated successfully.
Nov 29 08:16:13 compute-0 ovn_controller[153295]: 2025-11-29T08:16:13Z|00229|binding|INFO|Releasing lport d26e4c07-fd2e-4219-811a-8b7a975e0e27 from this chassis (sb_readonly=0)
Nov 29 08:16:13 compute-0 ovn_controller[153295]: 2025-11-29T08:16:13Z|00230|binding|INFO|Setting lport d26e4c07-fd2e-4219-811a-8b7a975e0e27 down in Southbound
Nov 29 08:16:13 compute-0 ovn_controller[153295]: 2025-11-29T08:16:13Z|00231|binding|INFO|Removing iface tapd26e4c07-fd ovn-installed in OVS
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.763 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.765 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:13.770 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:3b:b4 10.100.0.8'], port_security=['fa:16:3e:14:3b:b4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6568f6b1-4266-4fcc-b566-ae29baaa5c0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63f1d851-e4a5-47d7-a466-30b92fb8edc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=d26e4c07-fd2e-4219-811a-8b7a975e0e27) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:13.772 163500 INFO neutron.agent.ovn.metadata.agent [-] Port d26e4c07-fd2e-4219-811a-8b7a975e0e27 in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a unbound from our chassis
Nov 29 08:16:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:13.773 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7844e875-d723-468d-8c4a-c3bb5b3b635a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:16:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:13.775 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1291e88b-fd3a-4962-9108-fee66f962d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:13 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:13.776 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace which is not needed anymore
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.783 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 29 08:16:13 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 16.585s CPU time.
Nov 29 08:16:13 compute-0 systemd-machined[216271]: Machine qemu-24-instance-00000018 terminated.
Nov 29 08:16:13 compute-0 podman[293953]: 2025-11-29 08:16:13.83024548 +0000 UTC m=+2.244475111 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.917 255071 INFO nova.virt.libvirt.driver [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Instance destroyed successfully.
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.919 255071 DEBUG nova.objects.instance [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'resources' on Instance uuid 6568f6b1-4266-4fcc-b566-ae29baaa5c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.932 255071 DEBUG nova.virt.libvirt.vif [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-326543642',display_name='tempest-TestEncryptedCinderVolumes-server-326543642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-326543642',id=24,image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJ6IEXau3I2DYOLEkG9RIUH26bBA6kM3bMwX6LL/0O9tLj8zF4tA1UMRWuJS2Vf7WxBxf1SQRqfvTwsmd72QjleNxnnLHtSmC1XedHN3bm1oH9bTJmHDQHMF1nbKKRMHQ==',key_name='tempest-keypair-1501577220',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-3zy1jq8l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36a9388d-0d77-4d24-a915-be92247e5dbc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=6568f6b1-4266-4fcc-b566-ae29baaa5c0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.932 255071 DEBUG nova.network.os_vif_util [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "address": "fa:16:3e:14:3b:b4", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd26e4c07-fd", "ovs_interfaceid": "d26e4c07-fd2e-4219-811a-8b7a975e0e27", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.933 255071 DEBUG nova.network.os_vif_util [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.934 255071 DEBUG os_vif [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.936 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.937 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd26e4c07-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.938 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.941 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.943 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.947 255071 INFO os_vif [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:3b:b4,bridge_name='br-int',has_traffic_filtering=True,id=d26e4c07-fd2e-4219-811a-8b7a975e0e27,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd26e4c07-fd')
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.969 255071 DEBUG nova.compute.manager [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-unplugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.969 255071 DEBUG oslo_concurrency.lockutils [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.970 255071 DEBUG oslo_concurrency.lockutils [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.970 255071 DEBUG oslo_concurrency.lockutils [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.971 255071 DEBUG nova.compute.manager [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] No waiting events found dispatching network-vif-unplugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:13 compute-0 nova_compute[255040]: 2025-11-29 08:16:13.971 255071 DEBUG nova.compute.manager [req-f4edb30f-9699-4c9c-b313-4127e43cc20b req-9c62bdc0-901a-4727-ac65-1ba320291a0f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-unplugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:16:13 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.4 KiB/s wr, 75 op/s
Nov 29 08:16:14 compute-0 podman[294020]: 2025-11-29 08:16:13.915464635 +0000 UTC m=+0.040226239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:14 compute-0 podman[294020]: 2025-11-29 08:16:14.368731754 +0000 UTC m=+0.493493378 container create 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:16:14 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [NOTICE]   (293316) : haproxy version is 2.8.14-c23fe91
Nov 29 08:16:14 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [NOTICE]   (293316) : path to executable is /usr/sbin/haproxy
Nov 29 08:16:14 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [WARNING]  (293316) : Exiting Master process...
Nov 29 08:16:14 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [ALERT]    (293316) : Current worker (293318) exited with code 143 (Terminated)
Nov 29 08:16:14 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[293312]: [WARNING]  (293316) : All workers exited. Exiting... (0)
Nov 29 08:16:14 compute-0 systemd[1]: libpod-fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042.scope: Deactivated successfully.
Nov 29 08:16:14 compute-0 podman[294023]: 2025-11-29 08:16:14.380555063 +0000 UTC m=+0.486667142 container died fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 08:16:14 compute-0 systemd[1]: Started libpod-conmon-181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d.scope.
Nov 29 08:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042-userdata-shm.mount: Deactivated successfully.
Nov 29 08:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d596e9b52b6dab291ca97019feb5a2eedd74002d231461bb6daa8ef3877db4b3-merged.mount: Deactivated successfully.
Nov 29 08:16:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2913b56f74801bddf8ce74f99ce87ac2e665322a0d56733c034d74ea2b02e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2913b56f74801bddf8ce74f99ce87ac2e665322a0d56733c034d74ea2b02e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2913b56f74801bddf8ce74f99ce87ac2e665322a0d56733c034d74ea2b02e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2913b56f74801bddf8ce74f99ce87ac2e665322a0d56733c034d74ea2b02e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:14 compute-0 podman[294020]: 2025-11-29 08:16:14.542472532 +0000 UTC m=+0.667234146 container init 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:16:14 compute-0 podman[294020]: 2025-11-29 08:16:14.550540631 +0000 UTC m=+0.675302215 container start 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:16:14 compute-0 podman[294023]: 2025-11-29 08:16:14.551567008 +0000 UTC m=+0.657679057 container cleanup fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 08:16:14 compute-0 podman[294020]: 2025-11-29 08:16:14.568675471 +0000 UTC m=+0.693437055 container attach 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:16:14 compute-0 systemd[1]: libpod-conmon-fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042.scope: Deactivated successfully.
Nov 29 08:16:14 compute-0 podman[294096]: 2025-11-29 08:16:14.658472999 +0000 UTC m=+0.064707340 container remove fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.671 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5090f07f-6a43-4059-87e3-657e72751b71]: (4, ('Sat Nov 29 08:16:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042)\nfbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042\nSat Nov 29 08:16:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (fbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042)\nfbe5f8f49edce44c1da52fc10e6da3307888a0d6a04383bfbf2cedb4a1ca6042\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.674 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3750780f-db7d-4394-a1bd-b613768ef904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.675 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:16:14 compute-0 nova_compute[255040]: 2025-11-29 08:16:14.678 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:14 compute-0 kernel: tap7844e875-d0: left promiscuous mode
Nov 29 08:16:14 compute-0 nova_compute[255040]: 2025-11-29 08:16:14.696 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.701 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ae89dc9f-bda0-40b6-a43d-44f79ca26b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.715 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ac04f307-b483-4389-8a82-19b5b69a6982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.716 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e8221b45-8e80-4abb-88d7-53072fce0421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.733 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2a05e0ef-9c90-4e33-ba34-9c4347763b4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637620, 'reachable_time': 36898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294111, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d7844e875\x2dd723\x2d468d\x2d8c4a\x2dc3bb5b3b635a.mount: Deactivated successfully.
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.741 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:16:14 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:14.741 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[676103ed-8b92-4cef-9b89-d05a8feff693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:16:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Nov 29 08:16:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 08:16:14 compute-0 ceph-mon[75237]: pgmap v1884: 305 pgs: 305 active+clean; 167 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 3.4 KiB/s wr, 75 op/s
Nov 29 08:16:14 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Nov 29 08:16:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810309607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810309607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:14.999 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.311 255071 INFO nova.virt.libvirt.driver [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Deleting instance files /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f_del
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.312 255071 INFO nova.virt.libvirt.driver [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Deletion of /var/lib/nova/instances/6568f6b1-4266-4fcc-b566-ae29baaa5c0f_del complete
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]: {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     "0": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "devices": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "/dev/loop3"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             ],
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_name": "ceph_lv0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_size": "21470642176",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "name": "ceph_lv0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "tags": {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_name": "ceph",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.crush_device_class": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.encrypted": "0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_id": "0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.vdo": "0"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             },
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "vg_name": "ceph_vg0"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         }
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     ],
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     "1": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "devices": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "/dev/loop4"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             ],
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_name": "ceph_lv1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_size": "21470642176",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "name": "ceph_lv1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "tags": {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_name": "ceph",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.crush_device_class": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.encrypted": "0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_id": "1",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.vdo": "0"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             },
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "vg_name": "ceph_vg1"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         }
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     ],
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     "2": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "devices": [
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "/dev/loop5"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             ],
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_name": "ceph_lv2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_size": "21470642176",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "name": "ceph_lv2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "tags": {
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.cluster_name": "ceph",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.crush_device_class": "",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.encrypted": "0",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osd_id": "2",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:                 "ceph.vdo": "0"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             },
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "type": "block",
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:             "vg_name": "ceph_vg2"
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:         }
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]:     ]
Nov 29 08:16:15 compute-0 vigorous_kalam[294090]: }
Nov 29 08:16:15 compute-0 systemd[1]: libpod-181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d.scope: Deactivated successfully.
Nov 29 08:16:15 compute-0 conmon[294090]: conmon 181c433f04073fabc2b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d.scope/container/memory.events
Nov 29 08:16:15 compute-0 podman[294020]: 2025-11-29 08:16:15.365620744 +0000 UTC m=+1.490382338 container died 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.384 255071 INFO nova.compute.manager [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Took 4.11 seconds to destroy the instance on the hypervisor.
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.386 255071 DEBUG oslo.service.loopingcall [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.386 255071 DEBUG nova.compute.manager [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:16:15 compute-0 nova_compute[255040]: 2025-11-29 08:16:15.387 255071 DEBUG nova.network.neutron [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ce2913b56f74801bddf8ce74f99ce87ac2e665322a0d56733c034d74ea2b02e-merged.mount: Deactivated successfully.
Nov 29 08:16:15 compute-0 podman[294020]: 2025-11-29 08:16:15.43353897 +0000 UTC m=+1.558300554 container remove 181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:16:15 compute-0 systemd[1]: libpod-conmon-181c433f04073fabc2b4020ad9a3027891eb0390fead14717217655df1e1b90d.scope: Deactivated successfully.
Nov 29 08:16:15 compute-0 sudo[293871]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:15 compute-0 sudo[294134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:15 compute-0 sudo[294134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:15 compute-0 sudo[294134]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:15 compute-0 sudo[294159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:15 compute-0 sudo[294159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:15 compute-0 sudo[294159]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:15 compute-0 sudo[294184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:15 compute-0 sudo[294184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:15 compute-0 sudo[294184]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676193615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2676193615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 sudo[294209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:16:15 compute-0 sudo[294209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:15 compute-0 ceph-mon[75237]: osdmap e437: 3 total, 3 up, 3 in
Nov 29 08:16:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1810309607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1810309607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2676193615' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2676193615' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:15 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 145 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.9 KiB/s wr, 76 op/s
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.01404871 +0000 UTC m=+0.042552012 container create df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:16:16 compute-0 systemd[1]: Started libpod-conmon-df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad.scope.
Nov 29 08:16:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:15.993940906 +0000 UTC m=+0.022444228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.093011375 +0000 UTC m=+0.121514777 container init df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.10021761 +0000 UTC m=+0.128720912 container start df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.103866809 +0000 UTC m=+0.132370231 container attach df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:16:16 compute-0 objective_kalam[294290]: 167 167
Nov 29 08:16:16 compute-0 systemd[1]: libpod-df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad.scope: Deactivated successfully.
Nov 29 08:16:16 compute-0 conmon[294290]: conmon df53ff36738a306c2ccc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad.scope/container/memory.events
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.109116161 +0000 UTC m=+0.137619483 container died df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a6d547bddea08c8832fb7b66d077dd44e99915a07ac6c9823e54f08a93fd499-merged.mount: Deactivated successfully.
Nov 29 08:16:16 compute-0 podman[294274]: 2025-11-29 08:16:16.146358837 +0000 UTC m=+0.174862149 container remove df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:16:16 compute-0 systemd[1]: libpod-conmon-df53ff36738a306c2ccc56e1e5ed9442c7a91da489a65d91f20c143fb21729ad.scope: Deactivated successfully.
Nov 29 08:16:16 compute-0 podman[294313]: 2025-11-29 08:16:16.369649687 +0000 UTC m=+0.066661364 container create a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:16:16 compute-0 systemd[1]: Started libpod-conmon-a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c.scope.
Nov 29 08:16:16 compute-0 podman[294313]: 2025-11-29 08:16:16.339033939 +0000 UTC m=+0.036045696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65a267a739a2045d33f34c281ecd585025bda5588f7bacdded31af3762dd697/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65a267a739a2045d33f34c281ecd585025bda5588f7bacdded31af3762dd697/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65a267a739a2045d33f34c281ecd585025bda5588f7bacdded31af3762dd697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65a267a739a2045d33f34c281ecd585025bda5588f7bacdded31af3762dd697/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:16 compute-0 podman[294313]: 2025-11-29 08:16:16.471415689 +0000 UTC m=+0.168427366 container init a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:16:16 compute-0 podman[294313]: 2025-11-29 08:16:16.483878725 +0000 UTC m=+0.180890412 container start a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:16:16 compute-0 podman[294313]: 2025-11-29 08:16:16.488056448 +0000 UTC m=+0.185068145 container attach a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:16:16 compute-0 ceph-mon[75237]: pgmap v1886: 305 pgs: 305 active+clean; 145 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.9 KiB/s wr, 76 op/s
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.981 255071 DEBUG nova.compute.manager [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.983 255071 DEBUG oslo_concurrency.lockutils [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.984 255071 DEBUG oslo_concurrency.lockutils [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.984 255071 DEBUG oslo_concurrency.lockutils [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.984 255071 DEBUG nova.compute.manager [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] No waiting events found dispatching network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:16:16 compute-0 nova_compute[255040]: 2025-11-29 08:16:16.985 255071 WARNING nova.compute.manager [req-c1325375-8c9c-4fc9-b058-65db84bebf7c req-7afd83d5-9326-4c6a-a0ae-a98db49dcac0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received unexpected event network-vif-plugged-d26e4c07-fd2e-4219-811a-8b7a975e0e27 for instance with vm_state active and task_state deleting.
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.072 255071 DEBUG nova.network.neutron [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.281 255071 INFO nova.compute.manager [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Took 1.89 seconds to deallocate network for instance.
Nov 29 08:16:17 compute-0 elated_pasteur[294330]: {
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_id": 2,
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "type": "bluestore"
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     },
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_id": 0,
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "type": "bluestore"
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     },
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_id": 1,
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:         "type": "bluestore"
Nov 29 08:16:17 compute-0 elated_pasteur[294330]:     }
Nov 29 08:16:17 compute-0 elated_pasteur[294330]: }
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.452 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.453 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:17 compute-0 systemd[1]: libpod-a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c.scope: Deactivated successfully.
Nov 29 08:16:17 compute-0 podman[294363]: 2025-11-29 08:16:17.521666785 +0000 UTC m=+0.039878579 container died a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.528 255071 DEBUG oslo_concurrency.processutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d65a267a739a2045d33f34c281ecd585025bda5588f7bacdded31af3762dd697-merged.mount: Deactivated successfully.
Nov 29 08:16:17 compute-0 podman[294363]: 2025-11-29 08:16:17.590290483 +0000 UTC m=+0.108502257 container remove a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:16:17 compute-0 systemd[1]: libpod-conmon-a01279d3e24db761cb7c3570fbf58e06c2e2b0b4e635314cbaa40f597496be2c.scope: Deactivated successfully.
Nov 29 08:16:17 compute-0 sudo[294209]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:16:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:16:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:17 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 2ddff8ae-5544-41a5-bf36-0beea5027a26 does not exist
Nov 29 08:16:17 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b5e92fdf-0dec-4e77-a924-fab6784fcb7a does not exist
Nov 29 08:16:17 compute-0 sudo[294379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:17 compute-0 sudo[294379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:17 compute-0 sudo[294379]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:17 compute-0 sudo[294423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:16:17 compute-0 sudo[294423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:17 compute-0 sudo[294423]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Nov 29 08:16:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 08:16:17 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Nov 29 08:16:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989710764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.986 255071 DEBUG oslo_concurrency.processutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:17 compute-0 nova_compute[255040]: 2025-11-29 08:16:17.993 255071 DEBUG nova.compute.provider_tree [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:17 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 114 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 77 op/s
Nov 29 08:16:18 compute-0 nova_compute[255040]: 2025-11-29 08:16:18.103 255071 DEBUG nova.scheduler.client.report [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:18 compute-0 nova_compute[255040]: 2025-11-29 08:16:18.173 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:18 compute-0 nova_compute[255040]: 2025-11-29 08:16:18.200 255071 INFO nova.scheduler.client.report [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Deleted allocations for instance 6568f6b1-4266-4fcc-b566-ae29baaa5c0f
Nov 29 08:16:18 compute-0 nova_compute[255040]: 2025-11-29 08:16:18.508 255071 DEBUG oslo_concurrency.lockutils [None req-27c8bdfe-9743-4883-b72f-84f6e2fcff10 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "6568f6b1-4266-4fcc-b566-ae29baaa5c0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:16:18 compute-0 ceph-mon[75237]: osdmap e438: 3 total, 3 up, 3 in
Nov 29 08:16:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2989710764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:18 compute-0 ceph-mon[75237]: pgmap v1888: 305 pgs: 305 active+clean; 114 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.7 KiB/s wr, 77 op/s
Nov 29 08:16:18 compute-0 nova_compute[255040]: 2025-11-29 08:16:18.938 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:19 compute-0 nova_compute[255040]: 2025-11-29 08:16:19.045 255071 DEBUG nova.compute.manager [req-9bdd4e36-0f97-436d-8975-7faad7b78f11 req-ae46846c-846e-4020-a1fe-0e0c1f6866b0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Received event network-vif-deleted-d26e4c07-fd2e-4219-811a-8b7a975e0e27 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:16:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Nov 29 08:16:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Nov 29 08:16:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Nov 29 08:16:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Nov 29 08:16:19 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Nov 29 08:16:19 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Nov 29 08:16:19 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.5 KiB/s wr, 142 op/s
Nov 29 08:16:20 compute-0 nova_compute[255040]: 2025-11-29 08:16:20.002 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:20 compute-0 ceph-mon[75237]: osdmap e439: 3 total, 3 up, 3 in
Nov 29 08:16:20 compute-0 ceph-mon[75237]: osdmap e440: 3 total, 3 up, 3 in
Nov 29 08:16:20 compute-0 ceph-mon[75237]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.5 KiB/s wr, 142 op/s
Nov 29 08:16:20 compute-0 podman[294450]: 2025-11-29 08:16:20.892886653 +0000 UTC m=+0.063035039 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:16:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Nov 29 08:16:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Nov 29 08:16:21 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Nov 29 08:16:21 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 9.4 KiB/s wr, 157 op/s
Nov 29 08:16:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Nov 29 08:16:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Nov 29 08:16:22 compute-0 ceph-mon[75237]: osdmap e441: 3 total, 3 up, 3 in
Nov 29 08:16:22 compute-0 ceph-mon[75237]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 9.4 KiB/s wr, 157 op/s
Nov 29 08:16:22 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Nov 29 08:16:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1806673919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1806673919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:23 compute-0 ceph-mon[75237]: osdmap e442: 3 total, 3 up, 3 in
Nov 29 08:16:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1806673919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:23 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1806673919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:23 compute-0 nova_compute[255040]: 2025-11-29 08:16:23.940 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:23 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.8 KiB/s wr, 55 op/s
Nov 29 08:16:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Nov 29 08:16:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Nov 29 08:16:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Nov 29 08:16:24 compute-0 ceph-mon[75237]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.8 KiB/s wr, 55 op/s
Nov 29 08:16:24 compute-0 ceph-mon[75237]: osdmap e443: 3 total, 3 up, 3 in
Nov 29 08:16:25 compute-0 nova_compute[255040]: 2025-11-29 08:16:25.003 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663678015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663678015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3663678015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3663678015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.2 KiB/s wr, 136 op/s
Nov 29 08:16:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953566135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953566135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:26 compute-0 ceph-mon[75237]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 6.2 KiB/s wr, 136 op/s
Nov 29 08:16:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/953566135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/953566135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:27.141 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:27.141 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:27.142 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Nov 29 08:16:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Nov 29 08:16:27 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Nov 29 08:16:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 6.3 KiB/s wr, 153 op/s
Nov 29 08:16:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544078001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1544078001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:28 compute-0 nova_compute[255040]: 2025-11-29 08:16:28.916 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404173.9135633, 6568f6b1-4266-4fcc-b566-ae29baaa5c0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:16:28 compute-0 nova_compute[255040]: 2025-11-29 08:16:28.916 255071 INFO nova.compute.manager [-] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] VM Stopped (Lifecycle Event)
Nov 29 08:16:28 compute-0 nova_compute[255040]: 2025-11-29 08:16:28.933 255071 DEBUG nova.compute.manager [None req-cd6a80ca-3e02-43c2-87ab-f1428cc23b83 - - - - - -] [instance: 6568f6b1-4266-4fcc-b566-ae29baaa5c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:16:28 compute-0 nova_compute[255040]: 2025-11-29 08:16:28.942 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:29 compute-0 ceph-mon[75237]: osdmap e444: 3 total, 3 up, 3 in
Nov 29 08:16:29 compute-0 ceph-mon[75237]: pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 6.3 KiB/s wr, 153 op/s
Nov 29 08:16:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1544078001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1544078001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Nov 29 08:16:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 5.8 KiB/s wr, 163 op/s
Nov 29 08:16:30 compute-0 nova_compute[255040]: 2025-11-29 08:16:30.005 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Nov 29 08:16:30 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Nov 29 08:16:31 compute-0 ceph-mon[75237]: pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 5.8 KiB/s wr, 163 op/s
Nov 29 08:16:31 compute-0 ceph-mon[75237]: osdmap e445: 3 total, 3 up, 3 in
Nov 29 08:16:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 7.1 KiB/s wr, 232 op/s
Nov 29 08:16:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Nov 29 08:16:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Nov 29 08:16:32 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Nov 29 08:16:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/220965479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/220965479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:32 compute-0 nova_compute[255040]: 2025-11-29 08:16:32.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:33 compute-0 ceph-mon[75237]: pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 7.1 KiB/s wr, 232 op/s
Nov 29 08:16:33 compute-0 ceph-mon[75237]: osdmap e446: 3 total, 3 up, 3 in
Nov 29 08:16:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/220965479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/220965479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:33 compute-0 nova_compute[255040]: 2025-11-29 08:16:33.945 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 4.7 KiB/s wr, 177 op/s
Nov 29 08:16:34 compute-0 ceph-mon[75237]: pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 4.7 KiB/s wr, 177 op/s
Nov 29 08:16:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728135238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728135238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:34 compute-0 podman[294470]: 2025-11-29 08:16:34.937839903 +0000 UTC m=+0.101222973 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:16:34 compute-0 nova_compute[255040]: 2025-11-29 08:16:34.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:34 compute-0 nova_compute[255040]: 2025-11-29 08:16:34.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:34 compute-0 nova_compute[255040]: 2025-11-29 08:16:34.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:16:34 compute-0 nova_compute[255040]: 2025-11-29 08:16:34.995 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:16:35 compute-0 nova_compute[255040]: 2025-11-29 08:16:35.006 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Nov 29 08:16:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Nov 29 08:16:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Nov 29 08:16:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3728135238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3728135238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:35 compute-0 ceph-mon[75237]: osdmap e447: 3 total, 3 up, 3 in
Nov 29 08:16:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.8 KiB/s wr, 165 op/s
Nov 29 08:16:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498144148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498144148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Nov 29 08:16:36 compute-0 ceph-mon[75237]: pgmap v1906: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.8 KiB/s wr, 165 op/s
Nov 29 08:16:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1498144148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1498144148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Nov 29 08:16:36 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Nov 29 08:16:36 compute-0 nova_compute[255040]: 2025-11-29 08:16:36.996 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:36 compute-0 nova_compute[255040]: 2025-11-29 08:16:36.996 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:16:36 compute-0 nova_compute[255040]: 2025-11-29 08:16:36.996 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:16:37 compute-0 nova_compute[255040]: 2025-11-29 08:16:37.020 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:16:37 compute-0 nova_compute[255040]: 2025-11-29 08:16:37.020 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:37 compute-0 ceph-mon[75237]: osdmap e448: 3 total, 3 up, 3 in
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 KiB/s wr, 81 op/s
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:38 compute-0 ceph-mon[75237]: pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.8 KiB/s wr, 81 op/s
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:16:38
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', '.mgr']
Nov 29 08:16:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:16:38 compute-0 nova_compute[255040]: 2025-11-29 08:16:38.948 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:38 compute-0 nova_compute[255040]: 2025-11-29 08:16:38.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:38 compute-0 nova_compute[255040]: 2025-11-29 08:16:38.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:38 compute-0 nova_compute[255040]: 2025-11-29 08:16:38.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:16:38 compute-0 nova_compute[255040]: 2025-11-29 08:16:38.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.130 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.131 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.131 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.131 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.131 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382925772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.597 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.769 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.770 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4397MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.771 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.771 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Nov 29 08:16:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Nov 29 08:16:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Nov 29 08:16:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2382925772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442947132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442947132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.939 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:16:39 compute-0 nova_compute[255040]: 2025-11-29 08:16:39.939 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:16:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 153 op/s
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.008 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.012 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Nov 29 08:16:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Nov 29 08:16:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Nov 29 08:16:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562210083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.421 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.426 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.471 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.631 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:16:40 compute-0 nova_compute[255040]: 2025-11-29 08:16:40.631 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:40 compute-0 ceph-mon[75237]: osdmap e449: 3 total, 3 up, 3 in
Nov 29 08:16:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/442947132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/442947132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75237]: pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 KiB/s wr, 153 op/s
Nov 29 08:16:40 compute-0 ceph-mon[75237]: osdmap e450: 3 total, 3 up, 3 in
Nov 29 08:16:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/562210083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:41 compute-0 nova_compute[255040]: 2025-11-29 08:16:41.632 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:41 compute-0 nova_compute[255040]: 2025-11-29 08:16:41.633 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 868 KiB/s rd, 5.5 KiB/s wr, 146 op/s
Nov 29 08:16:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Nov 29 08:16:43 compute-0 ceph-mon[75237]: pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 868 KiB/s rd, 5.5 KiB/s wr, 146 op/s
Nov 29 08:16:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Nov 29 08:16:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:16:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:16:43 compute-0 nova_compute[255040]: 2025-11-29 08:16:43.950 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 91 op/s
Nov 29 08:16:44 compute-0 ceph-mon[75237]: osdmap e451: 3 total, 3 up, 3 in
Nov 29 08:16:44 compute-0 podman[294541]: 2025-11-29 08:16:44.890003059 +0000 UTC m=+0.057718057 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 08:16:45 compute-0 nova_compute[255040]: 2025-11-29 08:16:45.011 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e451 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Nov 29 08:16:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Nov 29 08:16:45 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Nov 29 08:16:45 compute-0 ceph-mon[75237]: pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 91 op/s
Nov 29 08:16:45 compute-0 ceph-mon[75237]: osdmap e452: 3 total, 3 up, 3 in
Nov 29 08:16:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279290322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279290322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 48 KiB/s wr, 81 op/s
Nov 29 08:16:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2279290322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2279290322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:46 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7815 writes, 36K keys, 7815 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7815 writes, 7815 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2011 writes, 9173 keys, 2011 commit groups, 1.0 writes per commit group, ingest: 11.97 MB, 0.02 MB/s
                                           Interval WAL: 2011 writes, 2011 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.2      4.07              0.17        19    0.214       0      0       0.0       0.0
                                             L6      1/0    9.35 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6     58.8     48.3      3.06              0.53        18    0.170     97K    10K       0.0       0.0
                                            Sum      1/0    9.35 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     25.2     26.5      7.13              0.70        37    0.193     97K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     14.4     14.9      3.20              0.16         8    0.401     27K   2648       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     58.8     48.3      3.06              0.53        18    0.170     97K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.2      4.07              0.17        18    0.226       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.3      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 7.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 3.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55dbdf32d1f0#2 capacity: 304.00 MB usage: 19.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000186 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1328,19.14 MB,6.29598%) FilterBlock(38,280.17 KB,0.0900018%) IndexBlock(38,525.17 KB,0.168705%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
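The RocksDB stats dump above is free-form text; a minimal parsing sketch (Python, assuming the "** DB Stats **" section has been captured as a string) that extracts the cumulative write and WAL counters shown in this dump:

import re

# Example input: the "** DB Stats **" header lines from the ceph-mon dump above.
stats_text = """\
Uptime(secs): 3000.0 total, 600.0 interval
Cumulative writes: 7815 writes, 36K keys, 7815 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Cumulative WAL: 7815 writes, 7815 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
"""

def parse_db_stats(text):
    """Pull write counts and ingest volume out of the DB Stats header lines."""
    out = {}
    m = re.search(r"Cumulative writes: (\d+) writes, (\S+) keys.*ingest: ([\d.]+) GB", text)
    if m:
        out["writes"], out["keys"], out["ingest_gb"] = int(m.group(1)), m.group(2), float(m.group(3))
    m = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", text)
    if m:
        out["wal_writes"], out["wal_syncs"] = int(m.group(1)), int(m.group(2))
    return out

print(parse_db_stats(stats_text))
# {'writes': 7815, 'keys': '36K', 'ingest_gb': 0.05, 'wal_writes': 7815, 'wal_syncs': 7815}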
Nov 29 08:16:47 compute-0 ovn_controller[153295]: 2025-11-29T08:16:47Z|00232|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 29 08:16:47 compute-0 ceph-mon[75237]: pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 48 KiB/s wr, 81 op/s
Nov 29 08:16:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438301876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438301876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 39 KiB/s wr, 129 op/s
Nov 29 08:16:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1438301876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1438301876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:48 compute-0 nova_compute[255040]: 2025-11-29 08:16:48.953 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Nov 29 08:16:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Nov 29 08:16:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Nov 29 08:16:49 compute-0 ceph-mon[75237]: pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 39 KiB/s wr, 129 op/s
Nov 29 08:16:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 43 KiB/s wr, 156 op/s
Nov 29 08:16:50 compute-0 nova_compute[255040]: 2025-11-29 08:16:50.012 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:50 compute-0 ceph-mon[75237]: osdmap e453: 3 total, 3 up, 3 in
Nov 29 08:16:50 compute-0 ceph-mon[75237]: pgmap v1919: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 43 KiB/s wr, 156 op/s
Nov 29 08:16:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Nov 29 08:16:51 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Nov 29 08:16:51 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Nov 29 08:16:51 compute-0 podman[294561]: 2025-11-29 08:16:51.885962485 +0000 UTC m=+0.056419071 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:16:51 compute-0 nova_compute[255040]: 2025-11-29 08:16:51.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:51 compute-0 nova_compute[255040]: 2025-11-29 08:16:51.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:16:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 7.1 KiB/s wr, 150 op/s
Nov 29 08:16:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265203736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265203736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:52 compute-0 ceph-mon[75237]: osdmap e454: 3 total, 3 up, 3 in
Nov 29 08:16:52 compute-0 ceph-mon[75237]: pgmap v1921: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 7.1 KiB/s wr, 150 op/s
Nov 29 08:16:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4265203736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:52 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4265203736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
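The paired df / "osd pool get-quota" commands from client.openstack recur every second or two above and read like a periodic capacity poll. A minimal sketch of issuing the same monitor commands through the librados Python binding (assumptions: python3-rados is installed, /etc/ceph/ceph.conf is readable, and a client.openstack keyring is available; mon_command takes a JSON-encoded command plus an input buffer and returns a (ret, output, error) tuple):

import json
import rados

# Connect with the same Ceph identity the audit log shows ('client.openstack').
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()

def mon_cmd(**kwargs):
    """Send one monitor command and return its decoded JSON output."""
    ret, outbuf, errs = cluster.mon_command(json.dumps(kwargs), b"")
    if ret != 0:
        raise RuntimeError(errs)
    return json.loads(outbuf)

# The two commands dispatched in the audit lines above.
df = mon_cmd(prefix="df", format="json")
quota = mon_cmd(prefix="osd pool get-quota", pool="volumes", format="json")
print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))

cluster.shutdown()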
Nov 29 08:16:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756122187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756122187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Nov 29 08:16:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Nov 29 08:16:53 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Nov 29 08:16:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2756122187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2756122187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:53 compute-0 nova_compute[255040]: 2025-11-29 08:16:53.960 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 4.5 KiB/s wr, 70 op/s
Nov 29 08:16:54 compute-0 ceph-mon[75237]: osdmap e455: 3 total, 3 up, 3 in
Nov 29 08:16:54 compute-0 ceph-mon[75237]: pgmap v1923: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 4.5 KiB/s wr, 70 op/s
Nov 29 08:16:55 compute-0 nova_compute[255040]: 2025-11-29 08:16:55.047 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Nov 29 08:16:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786219061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786219061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438786281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438786281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:55 compute-0 nova_compute[255040]: 2025-11-29 08:16:55.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 144 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 9.3 MiB/s wr, 189 op/s
Nov 29 08:16:56 compute-0 ceph-mon[75237]: osdmap e456: 3 total, 3 up, 3 in
Nov 29 08:16:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/786219061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/786219061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1438786281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1438786281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001328409886371036 of space, bias 1.0, pg target 0.3985229659113108 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
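Each pg_autoscaler line above is simple proportional math: a pool's raw PG target is its fraction of used space times its bias times the cluster's PG budget. A small sketch reproducing the logged 'volumes' and 'cephfs.cephfs.meta' figures, on the assumption that the budget here is 3 OSDs times the default mon_target_pg_per_osd of 100:

# Reproduce the pg_autoscaler arithmetic from the log lines above.
PG_BUDGET = 3 * 100   # assumed: num OSDs * mon_target_pg_per_osd

pools = {
    # name: (fraction of space used, bias) -- values copied from the log
    "volumes":            (0.001328409886371036, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (usage_ratio, bias) in pools.items():
    raw_target = usage_ratio * bias * PG_BUDGET
    print(f"{name}: raw pg target {raw_target:.6f}")

# volumes: raw pg target 0.398523             (log shows 0.3985229659113108)
# cephfs.cephfs.meta: raw pg target 0.000610  (log shows 0.0006104707950771635)
# The "quantized to" values stay at the current pg_num (32 and 16) because the
# autoscaler only recommends a change when the target differs by a large enough factor.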
Nov 29 08:16:57 compute-0 ceph-mon[75237]: pgmap v1925: 305 pgs: 305 active+clean; 144 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 9.3 MiB/s wr, 189 op/s
Nov 29 08:16:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 180 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 14 MiB/s wr, 195 op/s
Nov 29 08:16:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Nov 29 08:16:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Nov 29 08:16:58 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Nov 29 08:16:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:58.349 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:16:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:16:58.350 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:16:58 compute-0 nova_compute[255040]: 2025-11-29 08:16:58.350 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:58 compute-0 nova_compute[255040]: 2025-11-29 08:16:58.937 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:58 compute-0 nova_compute[255040]: 2025-11-29 08:16:58.937 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448037970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1448037970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:58 compute-0 nova_compute[255040]: 2025-11-29 08:16:58.956 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:16:58 compute-0 nova_compute[255040]: 2025-11-29 08:16:58.963 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:59 compute-0 ceph-mon[75237]: pgmap v1926: 305 pgs: 305 active+clean; 180 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 14 MiB/s wr, 195 op/s
Nov 29 08:16:59 compute-0 ceph-mon[75237]: osdmap e457: 3 total, 3 up, 3 in
Nov 29 08:16:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1448037970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1448037970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Nov 29 08:16:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Nov 29 08:16:59 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.165 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.165 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.173 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.173 255071 INFO nova.compute.claims [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.275 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716788531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.715 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.723 255071 DEBUG nova.compute.provider_tree [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.742 255071 DEBUG nova.scheduler.client.report [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
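The inventory dictionary above is what bounds scheduling onto this node: for each resource class, Placement treats (total - reserved) * allocation_ratio as the allocatable capacity. A quick check with the exact values from the log:

# Allocatable capacity implied by the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} allocatable")

# VCPU: 32 allocatable, MEMORY_MB: 7168 allocatable, DISK_GB: 52.2 allocatable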
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.769 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.771 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.816 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.817 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.836 255071 INFO nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.862 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.930 255071 INFO nova.virt.block_device [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Booting with volume 4a79eff0-f475-4892-8383-3e3a66532246 at /dev/vda
Nov 29 08:16:59 compute-0 nova_compute[255040]: 2025-11-29 08:16:59.996 255071 DEBUG nova.policy [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7b756f6c364e97a9d0d5298587d61c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6a2673206a04ec28205d820751e3174', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:17:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 159 KiB/s rd, 19 MiB/s wr, 224 op/s
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.095 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Nov 29 08:17:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.111927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220112060, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2068, "num_deletes": 285, "total_data_size": 2792798, "memory_usage": 2852560, "flush_reason": "Manual Compaction"}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 08:17:00 compute-0 ceph-mon[75237]: osdmap e458: 3 total, 3 up, 3 in
Nov 29 08:17:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3716788531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75237]: osdmap e459: 3 total, 3 up, 3 in
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220138250, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2742316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34577, "largest_seqno": 36643, "table_properties": {"data_size": 2732474, "index_size": 6208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21893, "raw_average_key_size": 21, "raw_value_size": 2712418, "raw_average_value_size": 2696, "num_data_blocks": 269, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 285, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404096, "oldest_key_time": 1764404096, "file_creation_time": 1764404220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 26374 microseconds, and 8766 cpu microseconds.
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.138317) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2742316 bytes OK
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.138336) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.139738) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.139754) EVENT_LOG_v1 {"time_micros": 1764404220139749, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.139767) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2783461, prev total WAL file size 2786362, number of live WAL files 2.
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.140796) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2678KB)], [71(9572KB)]
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220140936, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12544245, "oldest_snapshot_seqno": -1}
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.147 255071 DEBUG os_brick.utils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.148 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.169 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.170 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[0070ea0f-926f-4e85-af91-995b21e054a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.171 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.182 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.183 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[33289e7c-9b4b-420d-bbd7-cbda88a31a89]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.184 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.202 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.203 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[77534916-bdb4-4882-b39d-e8eb72e911a7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.205 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[b10c5308-36f3-4b0d-9643-48f1c42744cb]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.205 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.240 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.243 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.244 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.244 255071 DEBUG os_brick.initiator.connectors.lightos [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.245 255071 DEBUG os_brick.utils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
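Most of the connector properties returned above come from the small host probes logged just before it: multipathd status, the iSCSI initiator name, the filesystem backing "/", and the nvme CLI version. A rough standalone sketch of those probes (assumes the same tools are present; the real code runs them through privsep with root privileges, which this sketch skips):

import subprocess

def run(*cmd):
    """Run one probe and return stripped stdout, or "" if the tool is missing or fails."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return ""

props = {
    "multipath_status": run("multipathd", "show", "status"),
    "initiator": run("cat", "/etc/iscsi/initiatorname.iscsi"),       # "InitiatorName=iqn...."
    "root_source": run("findmnt", "-v", "/", "-n", "-o", "SOURCE"),  # "overlay" on this node
    "nvme_cli": run("nvme", "version"),
}
props["initiator"] = props["initiator"].partition("=")[2] or props["initiator"]
print(props)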
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.245 255071 DEBUG nova.virt.block_device [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating existing volume attachment record: 7ca07772-69d3-41d7-9852-78a6cf4a88ac _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 7029 keys, 12397438 bytes, temperature: kUnknown
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220256670, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12397438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12342453, "index_size": 36313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 177704, "raw_average_key_size": 25, "raw_value_size": 12208193, "raw_average_value_size": 1736, "num_data_blocks": 1461, "num_entries": 7029, "num_filter_entries": 7029, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.256989) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12397438 bytes
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.259166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.3 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.3 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 7596, records dropped: 567 output_compression: NoCompression
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.259204) EVENT_LOG_v1 {"time_micros": 1764404220259189, "job": 40, "event": "compaction_finished", "compaction_time_micros": 115848, "compaction_time_cpu_micros": 47317, "output_level": 6, "num_output_files": 1, "total_output_size": 12397438, "num_input_records": 7596, "num_output_records": 7029, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
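The compaction summary above can be sanity-checked by hand: job 40 read the 2.6 MB L0 flush (table #73) and the existing 9.3 MB L6 file (table #71) and wrote an 11.8 MB output (table #74), which yields the logged write-amplify(4.5) and read-write-amplify(9.1). The formula below is inferred from those numbers, so treat it as a reading of this log rather than RocksDB's authoritative definition:

# Byte counts taken from the flush/compaction events above (jobs 39 and 40).
l0_in = 2742316        # flushed L0 table #73
l6_in = 9572 * 1024    # existing L6 table #71, logged as 9572KB
out   = 12397438       # compacted output table #74

write_amp = out / l0_in
rw_amp = (l0_in + l6_in + out) / l0_in
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
# write-amplify 4.5, read-write-amplify 9.1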
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220260764, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220263114, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.140684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.263159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.263167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.263170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.263172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:00.263175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 nova_compute[255040]: 2025-11-29 08:17:00.676 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Successfully created port: acd40164-54b0-4ce3-aa1f-8a4e056c6881 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250642570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/250642570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:00 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2928667328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Nov 29 08:17:01 compute-0 ceph-mon[75237]: pgmap v1929: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 159 KiB/s rd, 19 MiB/s wr, 224 op/s
Nov 29 08:17:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/250642570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/250642570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2928667328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Nov 29 08:17:01 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.192 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.195 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.196 255071 INFO nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Creating image(s)
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.197 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.198 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Ensure instance console log exists: /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.199 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.199 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.200 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:01 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:01.351 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.369 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Successfully updated port: acd40164-54b0-4ce3-aa1f-8a4e056c6881 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.388 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.389 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquired lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.389 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.499 255071 DEBUG nova.compute.manager [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-changed-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.499 255071 DEBUG nova.compute.manager [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Refreshing instance network info cache due to event network-changed-acd40164-54b0-4ce3-aa1f-8a4e056c6881. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.499 255071 DEBUG oslo_concurrency.lockutils [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:01 compute-0 nova_compute[255040]: 2025-11-29 08:17:01.581 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:17:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.5 MiB/s wr, 106 op/s
Nov 29 08:17:02 compute-0 ceph-mon[75237]: osdmap e460: 3 total, 3 up, 3 in
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.711 255071 DEBUG nova.network.neutron [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating instance_info_cache with network_info: [{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.984 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Releasing lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.985 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Instance network_info: |[{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.986 255071 DEBUG oslo_concurrency.lockutils [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.986 255071 DEBUG nova.network.neutron [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Refreshing network info cache for port acd40164-54b0-4ce3-aa1f-8a4e056c6881 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:17:02 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.991 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Start _get_guest_xml network_info=[{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4a79eff0-f475-4892-8383-3e3a66532246', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4a79eff0-f475-4892-8383-3e3a66532246', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '5719d39a-a6d3-4e05-868e-db103119cdb6', 'attached_at': '', 'detached_at': '', 'volume_id': '4a79eff0-f475-4892-8383-3e3a66532246', 'serial': '4a79eff0-f475-4892-8383-3e3a66532246'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '7ca07772-69d3-41d7-9852-78a6cf4a88ac', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:02.999 255071 WARNING nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.008 255071 DEBUG nova.virt.libvirt.host [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.008 255071 DEBUG nova.virt.libvirt.host [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.013 255071 DEBUG nova.virt.libvirt.host [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.013 255071 DEBUG nova.virt.libvirt.host [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.014 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.014 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.015 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.015 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.015 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.016 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.016 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.016 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.017 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.017 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.017 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.017 255071 DEBUG nova.virt.hardware [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.040 255071 DEBUG nova.storage.rbd_utils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.044 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:03 compute-0 ceph-mon[75237]: pgmap v1932: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.5 MiB/s wr, 106 op/s
Nov 29 08:17:03 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:03 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368967473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.449 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.554 255071 DEBUG os_brick.encryptors [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Using volume encryption metadata '{'encryption_key_id': 'aa49936d-bef6-451b-9a2e-12915c631597', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4a79eff0-f475-4892-8383-3e3a66532246', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4a79eff0-f475-4892-8383-3e3a66532246', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '5719d39a-a6d3-4e05-868e-db103119cdb6', 'attached_at': '', 'detached_at': '', 'volume_id': '4a79eff0-f475-4892-8383-3e3a66532246', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.556 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.570 255071 DEBUG barbicanclient.v1.secrets [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/aa49936d-bef6-451b-9a2e-12915c631597 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.570 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.604 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.605 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.628 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.629 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.656 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.657 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.681 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.682 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.703 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.704 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.728 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.729 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.750 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.751 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.771 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.772 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.794 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.795 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.817 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.818 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.850 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.851 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.862 255071 DEBUG nova.network.neutron [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updated VIF entry in instance network info cache for port acd40164-54b0-4ce3-aa1f-8a4e056c6881. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.862 255071 DEBUG nova.network.neutron [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating instance_info_cache with network_info: [{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.876 255071 DEBUG oslo_concurrency.lockutils [req-c257ce61-2aa9-4f44-b52f-e2a5c8620b5a req-29020a36-fa41-4788-abfe-87406109ca39 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.887 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.888 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.912 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.913 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.937 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.938 255071 INFO barbicanclient.base [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/aa49936d-bef6-451b-9a2e-12915c631597
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.960 255071 DEBUG barbicanclient.client [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.962 255071 DEBUG nova.virt.libvirt.host [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:17:03 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:17:03 compute-0 nova_compute[255040]:     <volume>4a79eff0-f475-4892-8383-3e3a66532246</volume>
Nov 29 08:17:03 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:17:03 compute-0 nova_compute[255040]: </secret>
Nov 29 08:17:03 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.966 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.997 255071 DEBUG nova.virt.libvirt.vif [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-846120083',display_name='tempest-TestEncryptedCinderVolumes-server-846120083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-846120083',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-detp00aq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:59Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=5719d39a-a6d3-4e05-868e-db103119cdb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.998 255071 DEBUG nova.network.os_vif_util [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:03 compute-0 nova_compute[255040]: 2025-11-29 08:17:03.999 255071 DEBUG nova.network.os_vif_util [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.001 255071 DEBUG nova.objects.instance [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5719d39a-a6d3-4e05-868e-db103119cdb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.013 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <uuid>5719d39a-a6d3-4e05-868e-db103119cdb6</uuid>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <name>instance-00000019</name>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-846120083</nova:name>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:17:03</nova:creationTime>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:user uuid="8a7b756f6c364e97a9d0d5298587d61c">tempest-TestEncryptedCinderVolumes-2116890995-project-member</nova:user>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:project uuid="e6a2673206a04ec28205d820751e3174">tempest-TestEncryptedCinderVolumes-2116890995</nova:project>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <nova:port uuid="acd40164-54b0-4ce3-aa1f-8a4e056c6881">
Nov 29 08:17:04 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <system>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="serial">5719d39a-a6d3-4e05-868e-db103119cdb6</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="uuid">5719d39a-a6d3-4e05-868e-db103119cdb6</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </system>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <os>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </os>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <features>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </features>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </source>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-4a79eff0-f475-4892-8383-3e3a66532246">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </source>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <serial>4a79eff0-f475-4892-8383-3e3a66532246</serial>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:17:04 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="26aa6425-098d-4e85-ba5d-2b73e85931cb"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:9a:f9:a3"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <target dev="tapacd40164-54"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/console.log" append="off"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <video>
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </video>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:17:04 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:17:04 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:17:04 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:17:04 compute-0 nova_compute[255040]: </domain>
Nov 29 08:17:04 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.016 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Preparing to wait for external event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.016 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.017 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.017 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.018 255071 DEBUG nova.virt.libvirt.vif [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-846120083',display_name='tempest-TestEncryptedCinderVolumes-server-846120083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-846120083',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-detp00aq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:59Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=5719d39a-a6d3-4e05-868e-db103119cdb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.019 255071 DEBUG nova.network.os_vif_util [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.7 MiB/s wr, 72 op/s
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.020 255071 DEBUG nova.network.os_vif_util [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.022 255071 DEBUG os_vif [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.023 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.023 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.024 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.029 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.029 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd40164-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.030 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacd40164-54, col_values=(('external_ids', {'iface-id': 'acd40164-54b0-4ce3-aa1f-8a4e056c6881', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:f9:a3', 'vm-uuid': '5719d39a-a6d3-4e05-868e-db103119cdb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.033 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 NetworkManager[49116]: <info>  [1764404224.0342] manager: (tapacd40164-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.036 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.043 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.045 255071 INFO os_vif [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54')
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.094 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.095 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.095 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No VIF found with MAC fa:16:3e:9a:f9:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.096 255071 INFO nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Using config drive
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.121 255071 DEBUG nova.storage.rbd_utils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Nov 29 08:17:04 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3368967473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:04 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Nov 29 08:17:04 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.470 255071 INFO nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Creating config drive at /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.482 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp97y5l6a8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.613 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp97y5l6a8" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.637 255071 DEBUG nova.storage.rbd_utils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image 5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.640 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config 5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.784 255071 DEBUG oslo_concurrency.processutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config 5719d39a-a6d3-4e05-868e-db103119cdb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.785 255071 INFO nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Deleting local config drive /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6/disk.config because it was imported into RBD.
Nov 29 08:17:04 compute-0 kernel: tapacd40164-54: entered promiscuous mode
Nov 29 08:17:04 compute-0 NetworkManager[49116]: <info>  [1764404224.8473] manager: (tapacd40164-54): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 08:17:04 compute-0 ovn_controller[153295]: 2025-11-29T08:17:04Z|00233|binding|INFO|Claiming lport acd40164-54b0-4ce3-aa1f-8a4e056c6881 for this chassis.
Nov 29 08:17:04 compute-0 ovn_controller[153295]: 2025-11-29T08:17:04Z|00234|binding|INFO|acd40164-54b0-4ce3-aa1f-8a4e056c6881: Claiming fa:16:3e:9a:f9:a3 10.100.0.3
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.848 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.858 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:f9:a3 10.100.0.3'], port_security=['fa:16:3e:9a:f9:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5719d39a-a6d3-4e05-868e-db103119cdb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=acd40164-54b0-4ce3-aa1f-8a4e056c6881) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.861 163500 INFO neutron.agent.ovn.metadata.agent [-] Port acd40164-54b0-4ce3-aa1f-8a4e056c6881 in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a bound to our chassis
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.863 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:17:04 compute-0 ovn_controller[153295]: 2025-11-29T08:17:04Z|00235|binding|INFO|Setting lport acd40164-54b0-4ce3-aa1f-8a4e056c6881 ovn-installed in OVS
Nov 29 08:17:04 compute-0 ovn_controller[153295]: 2025-11-29T08:17:04Z|00236|binding|INFO|Setting lport acd40164-54b0-4ce3-aa1f-8a4e056c6881 up in Southbound
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.872 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.877 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4a0864-ac02-4088-87df-cf9e46b629a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 nova_compute[255040]: 2025-11-29 08:17:04.879 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.879 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7844e875-d1 in ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.882 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7844e875-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.883 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[01d4ce7c-7549-42f7-97dd-9d1b718b7d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.885 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[69ae3ec4-02e6-4cbf-899d-92d8eb57be9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 systemd-udevd[294724]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:17:04 compute-0 systemd-machined[216271]: New machine qemu-25-instance-00000019.
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.903 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[8aba3ed0-4820-46bf-97f0-57fdcc4e9af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 NetworkManager[49116]: <info>  [1764404224.9055] device (tapacd40164-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:17:04 compute-0 NetworkManager[49116]: <info>  [1764404224.9065] device (tapacd40164-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:17:04 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.932 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[907bf7cb-a4d2-443b-85b7-31e020a57cf3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.972 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb60b29-48f0-4f31-90c4-7d7a42af156f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:04.977 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa41273-c672-46c1-a479-9e8c5fb29d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:04 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:17:04 compute-0 NetworkManager[49116]: <info>  [1764404224.9789] manager: (tap7844e875-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 08:17:04 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:17:04 compute-0 systemd-udevd[294728]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.013 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[10e05d68-4557-4879-9922-b55527492751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.018 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f1a81f-ce3e-4e38-9ca5-b5673f3b5b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 NetworkManager[49116]: <info>  [1764404225.0439] device (tap7844e875-d0): carrier: link connected
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.048 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[56220e7b-f225-4caa-9a91-14fa59899b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.068 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b69b133e-9545-4eb1-b52b-38a49404cc68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646238, 'reachable_time': 21732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.084 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c01047-ca7f-47f3-96ec-c9f95692caf6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7298'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646238, 'tstamp': 646238}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294779, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.096 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.107 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[aa70ef58-c2c5-4fed-9402-1b09361e1b00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646238, 'reachable_time': 21732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294783, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 podman[294743]: 2025-11-29 08:17:05.114956157 +0000 UTC m=+0.100452281 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.148 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[db4688f1-1f5b-4510-9b5c-0853e16f9618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ceph-mon[75237]: pgmap v1933: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.7 MiB/s wr, 72 op/s
Nov 29 08:17:05 compute-0 ceph-mon[75237]: osdmap e461: 3 total, 3 up, 3 in
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.210 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5af221-a48f-42d7-a6bf-5510f20b8478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.211 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.211 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.212 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7844e875-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-0 kernel: tap7844e875-d0: entered promiscuous mode
Nov 29 08:17:05 compute-0 NetworkManager[49116]: <info>  [1764404225.2139] manager: (tap7844e875-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.213 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.215 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.217 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7844e875-d0, col_values=(('external_ids', {'iface-id': 'b495613a-3fb1-48c4-aa81-640b29e83d9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.219 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 ovn_controller[153295]: 2025-11-29T08:17:05Z|00237|binding|INFO|Releasing lport b495613a-3fb1-48c4-aa81-640b29e83d9b from this chassis (sb_readonly=0)
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.220 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.221 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.223 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3a72b7-698a-4685-8830-5214cc8c59fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.224 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:17:05 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:05.224 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'env', 'PROCESS_TAG=haproxy-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7844e875-d723-468d-8c4a-c3bb5b3b635a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.235 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.259 255071 DEBUG nova.compute.manager [req-24957af8-5d04-43fe-a09a-4b676340533d req-56e9c89e-5aff-4f99-89dd-67e1d106ece3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.260 255071 DEBUG oslo_concurrency.lockutils [req-24957af8-5d04-43fe-a09a-4b676340533d req-56e9c89e-5aff-4f99-89dd-67e1d106ece3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.260 255071 DEBUG oslo_concurrency.lockutils [req-24957af8-5d04-43fe-a09a-4b676340533d req-56e9c89e-5aff-4f99-89dd-67e1d106ece3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.260 255071 DEBUG oslo_concurrency.lockutils [req-24957af8-5d04-43fe-a09a-4b676340533d req-56e9c89e-5aff-4f99-89dd-67e1d106ece3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:05 compute-0 nova_compute[255040]: 2025-11-29 08:17:05.260 255071 DEBUG nova.compute.manager [req-24957af8-5d04-43fe-a09a-4b676340533d req-56e9c89e-5aff-4f99-89dd-67e1d106ece3 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Processing event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:17:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518252673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:05 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/518252673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:05 compute-0 podman[294853]: 2025-11-29 08:17:05.563727468 +0000 UTC m=+0.042397687 container create 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 08:17:05 compute-0 systemd[1]: Started libpod-conmon-6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792.scope.
Nov 29 08:17:05 compute-0 podman[294853]: 2025-11-29 08:17:05.540211278 +0000 UTC m=+0.018881507 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:17:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b48f04e003ef9d611a6a0640ad0bc369c8572adf911f7e4ad76d944ef1d66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:05 compute-0 podman[294853]: 2025-11-29 08:17:05.672722757 +0000 UTC m=+0.151393036 container init 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:17:05 compute-0 podman[294853]: 2025-11-29 08:17:05.684378439 +0000 UTC m=+0.163048678 container start 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 08:17:05 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [NOTICE]   (294872) : New worker (294874) forked
Nov 29 08:17:05 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [NOTICE]   (294872) : Loading success.
Nov 29 08:17:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 8.5 KiB/s wr, 164 op/s
Nov 29 08:17:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Nov 29 08:17:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/518252673' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:06 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/518252673' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:06 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Nov 29 08:17:06 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Nov 29 08:17:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Nov 29 08:17:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Nov 29 08:17:07 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Nov 29 08:17:07 compute-0 ceph-mon[75237]: pgmap v1935: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 8.5 KiB/s wr, 164 op/s
Nov 29 08:17:07 compute-0 ceph-mon[75237]: osdmap e462: 3 total, 3 up, 3 in
Nov 29 08:17:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619514652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:07 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:07 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619514652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.344 255071 DEBUG nova.compute.manager [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.345 255071 DEBUG oslo_concurrency.lockutils [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.345 255071 DEBUG oslo_concurrency.lockutils [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.345 255071 DEBUG oslo_concurrency.lockutils [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.346 255071 DEBUG nova.compute.manager [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] No waiting events found dispatching network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.346 255071 WARNING nova.compute.manager [req-62467ad3-507d-4cd5-977e-75c134c1effa req-961a8282-9b03-40b8-b5f2-715923ebce1f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received unexpected event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 for instance with vm_state building and task_state spawning.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.750 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404227.749939, 5719d39a-a6d3-4e05-868e-db103119cdb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.751 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] VM Started (Lifecycle Event)
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.755 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.760 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.764 255071 INFO nova.virt.libvirt.driver [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Instance spawned successfully.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.764 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.779 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.784 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.789 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.789 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.790 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.790 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.790 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.791 255071 DEBUG nova.virt.libvirt.driver [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.822 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.823 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404227.7501712, 5719d39a-a6d3-4e05-868e-db103119cdb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.823 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] VM Paused (Lifecycle Event)
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.851 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.855 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404227.7597687, 5719d39a-a6d3-4e05-868e-db103119cdb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.855 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] VM Resumed (Lifecycle Event)
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.880 255071 INFO nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Took 6.69 seconds to spawn the instance on the hypervisor.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.881 255071 DEBUG nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.894 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.897 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.931 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.956 255071 INFO nova.compute.manager [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Took 8.95 seconds to build instance.
Nov 29 08:17:07 compute-0 nova_compute[255040]: 2025-11-29 08:17:07.973 255071 DEBUG oslo_concurrency.lockutils [None req-d6cef3f4-e2fd-416b-b2ab-bbfb79cf0937 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 7.5 KiB/s wr, 117 op/s
Nov 29 08:17:08 compute-0 ceph-mon[75237]: osdmap e463: 3 total, 3 up, 3 in
Nov 29 08:17:08 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/619514652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:08 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/619514652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:09 compute-0 nova_compute[255040]: 2025-11-29 08:17:09.033 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:09 compute-0 ceph-mon[75237]: pgmap v1938: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 7.5 KiB/s wr, 117 op/s
Nov 29 08:17:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 35 KiB/s wr, 239 op/s
Nov 29 08:17:10 compute-0 nova_compute[255040]: 2025-11-29 08:17:10.099 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e463 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Nov 29 08:17:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Nov 29 08:17:10 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Nov 29 08:17:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Nov 29 08:17:11 compute-0 ceph-mon[75237]: pgmap v1939: 305 pgs: 305 active+clean; 202 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 35 KiB/s wr, 239 op/s
Nov 29 08:17:11 compute-0 ceph-mon[75237]: osdmap e464: 3 total, 3 up, 3 in
Nov 29 08:17:11 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Nov 29 08:17:11 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Nov 29 08:17:11 compute-0 nova_compute[255040]: 2025-11-29 08:17:11.144 255071 DEBUG nova.compute.manager [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-changed-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:11 compute-0 nova_compute[255040]: 2025-11-29 08:17:11.144 255071 DEBUG nova.compute.manager [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Refreshing instance network info cache due to event network-changed-acd40164-54b0-4ce3-aa1f-8a4e056c6881. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:17:11 compute-0 nova_compute[255040]: 2025-11-29 08:17:11.144 255071 DEBUG oslo_concurrency.lockutils [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:11 compute-0 nova_compute[255040]: 2025-11-29 08:17:11.145 255071 DEBUG oslo_concurrency.lockutils [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:11 compute-0 nova_compute[255040]: 2025-11-29 08:17:11.145 255071 DEBUG nova.network.neutron [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Refreshing network info cache for port acd40164-54b0-4ce3-aa1f-8a4e056c6881 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:17:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 35 KiB/s wr, 324 op/s
Nov 29 08:17:12 compute-0 ceph-mon[75237]: osdmap e465: 3 total, 3 up, 3 in
Nov 29 08:17:12 compute-0 nova_compute[255040]: 2025-11-29 08:17:12.164 255071 DEBUG nova.network.neutron [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updated VIF entry in instance network info cache for port acd40164-54b0-4ce3-aa1f-8a4e056c6881. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:17:12 compute-0 nova_compute[255040]: 2025-11-29 08:17:12.166 255071 DEBUG nova.network.neutron [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating instance_info_cache with network_info: [{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:12 compute-0 nova_compute[255040]: 2025-11-29 08:17:12.212 255071 DEBUG oslo_concurrency.lockutils [req-b9e25462-0baa-412c-a1db-482d0fc7208f req-28aacccf-784c-4e97-9813-f8b89674b443 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3950119937' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:12 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:12 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3950119937' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Nov 29 08:17:13 compute-0 ceph-mon[75237]: pgmap v1942: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 35 KiB/s wr, 324 op/s
Nov 29 08:17:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3950119937' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3950119937' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Nov 29 08:17:13 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Nov 29 08:17:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 34 KiB/s wr, 314 op/s
Nov 29 08:17:14 compute-0 nova_compute[255040]: 2025-11-29 08:17:14.035 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:14 compute-0 ceph-mon[75237]: osdmap e466: 3 total, 3 up, 3 in
Nov 29 08:17:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Nov 29 08:17:15 compute-0 nova_compute[255040]: 2025-11-29 08:17:15.102 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Nov 29 08:17:15 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Nov 29 08:17:15 compute-0 ceph-mon[75237]: pgmap v1944: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 34 KiB/s wr, 314 op/s
Nov 29 08:17:15 compute-0 ceph-mon[75237]: osdmap e467: 3 total, 3 up, 3 in
Nov 29 08:17:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074359104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:15 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074359104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:15 compute-0 podman[294889]: 2025-11-29 08:17:15.947742641 +0000 UTC m=+0.108209919 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.8 KiB/s wr, 187 op/s
Nov 29 08:17:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Nov 29 08:17:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Nov 29 08:17:16 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Nov 29 08:17:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2074359104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:16 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2074359104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:16 compute-0 ceph-mon[75237]: pgmap v1946: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.8 KiB/s wr, 187 op/s
Nov 29 08:17:16 compute-0 ceph-mon[75237]: osdmap e468: 3 total, 3 up, 3 in
Nov 29 08:17:17 compute-0 sudo[294906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:17 compute-0 sudo[294906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:17 compute-0 sudo[294906]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:17 compute-0 sudo[294931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:17 compute-0 sudo[294931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:17 compute-0 sudo[294931]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 7.2 KiB/s wr, 179 op/s
Nov 29 08:17:18 compute-0 sudo[294956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:18 compute-0 sudo[294956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:18 compute-0 sudo[294956]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:18 compute-0 sudo[294981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:17:18 compute-0 sudo[294981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:18 compute-0 sudo[294981]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:18 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9f0a9d93-3f92-4d3b-9d13-e52d880559ba does not exist
Nov 29 08:17:18 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 9818aed1-51b6-4c1e-a8ad-73ea41364e1b does not exist
Nov 29 08:17:18 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d75df893-2506-468b-93a7-25ce6d4fcc6a does not exist
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:17:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:17:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:18 compute-0 sudo[295038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:18 compute-0 sudo[295038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:18 compute-0 sudo[295038]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:19 compute-0 sudo[295063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:19 compute-0 sudo[295063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:19 compute-0 sudo[295063]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:19 compute-0 nova_compute[255040]: 2025-11-29 08:17:19.037 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:19 compute-0 sudo[295088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:19 compute-0 sudo[295088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:19 compute-0 sudo[295088]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:19 compute-0 ceph-mon[75237]: pgmap v1948: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 7.2 KiB/s wr, 179 op/s
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:17:19 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:17:19 compute-0 sudo[295113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:17:19 compute-0 sudo[295113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.515175754 +0000 UTC m=+0.049881457 container create 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 08:17:19 compute-0 systemd[1]: Started libpod-conmon-008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9.scope.
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.489558128 +0000 UTC m=+0.024263831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.608648088 +0000 UTC m=+0.143353801 container init 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.618788059 +0000 UTC m=+0.153493742 container start 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:17:19 compute-0 brave_merkle[295193]: 167 167
Nov 29 08:17:19 compute-0 systemd[1]: libpod-008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9.scope: Deactivated successfully.
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.628479719 +0000 UTC m=+0.163185392 container attach 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.631181181 +0000 UTC m=+0.165886864 container died 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:17:19 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-373d107bb813d7b39d35e89f90826ffcbe7e98f1e644965fe2526267d9cd1352-merged.mount: Deactivated successfully.
Nov 29 08:17:19 compute-0 podman[295178]: 2025-11-29 08:17:19.697256221 +0000 UTC m=+0.231961904 container remove 008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:17:19 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:17:19 compute-0 systemd[1]: libpod-conmon-008d9f5927c00f208f5067dac2e8d49f27d41bd40342873c28c5ac0156b817b9.scope: Deactivated successfully.
Nov 29 08:17:19 compute-0 podman[295216]: 2025-11-29 08:17:19.870266555 +0000 UTC m=+0.044523794 container create e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:17:19 compute-0 systemd[1]: Started libpod-conmon-e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b.scope.
Nov 29 08:17:19 compute-0 podman[295216]: 2025-11-29 08:17:19.85216012 +0000 UTC m=+0.026417379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:19 compute-0 podman[295216]: 2025-11-29 08:17:19.976172751 +0000 UTC m=+0.150430000 container init e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:17:19 compute-0 podman[295216]: 2025-11-29 08:17:19.982011768 +0000 UTC m=+0.156269007 container start e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:17:19 compute-0 podman[295216]: 2025-11-29 08:17:19.985676206 +0000 UTC m=+0.159933455 container attach e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:17:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 202 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 7.3 KiB/s wr, 179 op/s
Nov 29 08:17:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Nov 29 08:17:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Nov 29 08:17:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Nov 29 08:17:20 compute-0 nova_compute[255040]: 2025-11-29 08:17:20.105 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Nov 29 08:17:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Nov 29 08:17:20 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Nov 29 08:17:21 compute-0 serene_galileo[295232]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:17:21 compute-0 serene_galileo[295232]: --> relative data size: 1.0
Nov 29 08:17:21 compute-0 serene_galileo[295232]: --> All data devices are unavailable
Nov 29 08:17:21 compute-0 systemd[1]: libpod-e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b.scope: Deactivated successfully.
Nov 29 08:17:21 compute-0 systemd[1]: libpod-e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b.scope: Consumed 1.045s CPU time.
Nov 29 08:17:21 compute-0 podman[295261]: 2025-11-29 08:17:21.137790055 +0000 UTC m=+0.028022341 container died e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:17:21 compute-0 ceph-mon[75237]: pgmap v1949: 305 pgs: 305 active+clean; 202 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 7.3 KiB/s wr, 179 op/s
Nov 29 08:17:21 compute-0 ceph-mon[75237]: osdmap e469: 3 total, 3 up, 3 in
Nov 29 08:17:21 compute-0 ceph-mon[75237]: osdmap e470: 3 total, 3 up, 3 in
Nov 29 08:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b4ec818815f50a531d4d30f6f6481fd61a6f3006ba835e6cc1430f69481c06-merged.mount: Deactivated successfully.
Nov 29 08:17:21 compute-0 podman[295261]: 2025-11-29 08:17:21.701169504 +0000 UTC m=+0.591401730 container remove e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:17:21 compute-0 systemd[1]: libpod-conmon-e782bbe0a30f94404b8df9404ceb38c7a2891c4a32b784f0b42c4cabe69d0e7b.scope: Deactivated successfully.
Nov 29 08:17:21 compute-0 sudo[295113]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:21 compute-0 sudo[295276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:21 compute-0 sudo[295276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:21 compute-0 sudo[295276]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:21 compute-0 ovn_controller[153295]: 2025-11-29T08:17:21Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:f9:a3 10.100.0.3
Nov 29 08:17:21 compute-0 ovn_controller[153295]: 2025-11-29T08:17:21Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:f9:a3 10.100.0.3
Nov 29 08:17:21 compute-0 sudo[295301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:21 compute-0 sudo[295301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:21 compute-0 sudo[295301]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:21 compute-0 sudo[295326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:21 compute-0 sudo[295326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:21 compute-0 sudo[295326]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 210 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 764 KiB/s rd, 4.1 MiB/s wr, 185 op/s
Nov 29 08:17:22 compute-0 sudo[295357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:17:22 compute-0 sudo[295357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:22 compute-0 podman[295350]: 2025-11-29 08:17:22.062838122 +0000 UTC m=+0.080238521 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.353319782 +0000 UTC m=+0.021804514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.600429811 +0000 UTC m=+0.268914533 container create 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 08:17:22 compute-0 ceph-mon[75237]: pgmap v1952: 305 pgs: 305 active+clean; 210 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 764 KiB/s rd, 4.1 MiB/s wr, 185 op/s
Nov 29 08:17:22 compute-0 systemd[1]: Started libpod-conmon-0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54.scope.
Nov 29 08:17:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.756985485 +0000 UTC m=+0.425470247 container init 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.763715264 +0000 UTC m=+0.432199976 container start 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 08:17:22 compute-0 mystifying_kapitsa[295451]: 167 167
Nov 29 08:17:22 compute-0 systemd[1]: libpod-0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54.scope: Deactivated successfully.
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.773149427 +0000 UTC m=+0.441634139 container attach 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.773494806 +0000 UTC m=+0.441979548 container died 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f3387fa9e62085880c4e7402ee0fc7fbaec44d2df28dea5bdf172e1cdc5284-merged.mount: Deactivated successfully.
Nov 29 08:17:22 compute-0 podman[295435]: 2025-11-29 08:17:22.821985636 +0000 UTC m=+0.490470338 container remove 0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:17:22 compute-0 systemd[1]: libpod-conmon-0247bc265b994bb6c237e3b5285a48902540dcb809c6cca237ba099e3a554c54.scope: Deactivated successfully.
Nov 29 08:17:23 compute-0 podman[295477]: 2025-11-29 08:17:22.973789801 +0000 UTC m=+0.025565195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Nov 29 08:17:23 compute-0 podman[295477]: 2025-11-29 08:17:23.791459562 +0000 UTC m=+0.843234956 container create cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:17:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Nov 29 08:17:23 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Nov 29 08:17:23 compute-0 systemd[1]: Started libpod-conmon-cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599.scope.
Nov 29 08:17:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f137386b67e84294643c80f60b58ced986634a37520d4e923b6a976cf50b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f137386b67e84294643c80f60b58ced986634a37520d4e923b6a976cf50b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f137386b67e84294643c80f60b58ced986634a37520d4e923b6a976cf50b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1f137386b67e84294643c80f60b58ced986634a37520d4e923b6a976cf50b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:23 compute-0 podman[295477]: 2025-11-29 08:17:23.908290682 +0000 UTC m=+0.960066076 container init cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:17:23 compute-0 podman[295477]: 2025-11-29 08:17:23.917131739 +0000 UTC m=+0.968907113 container start cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:17:23 compute-0 podman[295477]: 2025-11-29 08:17:23.920598822 +0000 UTC m=+0.972374196 container attach cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:17:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 210 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 729 KiB/s rd, 4.1 MiB/s wr, 138 op/s
Nov 29 08:17:24 compute-0 nova_compute[255040]: 2025-11-29 08:17:24.041 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]: {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     "0": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "devices": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "/dev/loop3"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             ],
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_name": "ceph_lv0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_size": "21470642176",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "name": "ceph_lv0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "tags": {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.crush_device_class": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.encrypted": "0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_id": "0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.vdo": "0"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             },
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "vg_name": "ceph_vg0"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         }
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     ],
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     "1": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "devices": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "/dev/loop4"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             ],
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_name": "ceph_lv1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_size": "21470642176",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "name": "ceph_lv1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "tags": {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.crush_device_class": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.encrypted": "0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_id": "1",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.vdo": "0"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             },
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "vg_name": "ceph_vg1"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         }
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     ],
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     "2": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "devices": [
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "/dev/loop5"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             ],
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_name": "ceph_lv2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_size": "21470642176",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "name": "ceph_lv2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "tags": {
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.crush_device_class": "",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.encrypted": "0",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osd_id": "2",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:                 "ceph.vdo": "0"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             },
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "type": "block",
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:             "vg_name": "ceph_vg2"
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:         }
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]:     ]
Nov 29 08:17:24 compute-0 dazzling_cannon[295493]: }
Nov 29 08:17:24 compute-0 systemd[1]: libpod-cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599.scope: Deactivated successfully.
Nov 29 08:17:24 compute-0 podman[295477]: 2025-11-29 08:17:24.732793566 +0000 UTC m=+1.784568950 container died cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 08:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e1f137386b67e84294643c80f60b58ced986634a37520d4e923b6a976cf50b8-merged.mount: Deactivated successfully.
Nov 29 08:17:24 compute-0 podman[295477]: 2025-11-29 08:17:24.784985354 +0000 UTC m=+1.836760738 container remove cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:17:24 compute-0 systemd[1]: libpod-conmon-cad235bdb451c9e2a8cc018064772be5de178944cee2d8ffe9a547a4723ef599.scope: Deactivated successfully.
Nov 29 08:17:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Nov 29 08:17:24 compute-0 ceph-mon[75237]: osdmap e471: 3 total, 3 up, 3 in
Nov 29 08:17:24 compute-0 ceph-mon[75237]: pgmap v1954: 305 pgs: 305 active+clean; 210 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 729 KiB/s rd, 4.1 MiB/s wr, 138 op/s
Nov 29 08:17:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Nov 29 08:17:24 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Nov 29 08:17:24 compute-0 sudo[295357]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:24 compute-0 sudo[295514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:24 compute-0 sudo[295514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:24 compute-0 sudo[295514]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:24 compute-0 sudo[295539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:24 compute-0 sudo[295539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:24 compute-0 sudo[295539]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:25 compute-0 sudo[295564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:25 compute-0 sudo[295564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:25 compute-0 sudo[295564]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792897533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1792897533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:25 compute-0 sudo[295589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:17:25 compute-0 sudo[295589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:25 compute-0 nova_compute[255040]: 2025-11-29 08:17:25.107 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.483611117 +0000 UTC m=+0.038945345 container create 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:17:25 compute-0 systemd[1]: Started libpod-conmon-40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69.scope.
Nov 29 08:17:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.556855848 +0000 UTC m=+0.112190096 container init 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.470087474 +0000 UTC m=+0.025421722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.566257 +0000 UTC m=+0.121591238 container start 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.570778801 +0000 UTC m=+0.126113059 container attach 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:17:25 compute-0 elastic_bouman[295673]: 167 167
Nov 29 08:17:25 compute-0 systemd[1]: libpod-40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69.scope: Deactivated successfully.
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.57555377 +0000 UTC m=+0.130888018 container died 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-54afdc866c19dbd055b21ee582feeb5888a223500a912819612a3f4a412493b2-merged.mount: Deactivated successfully.
Nov 29 08:17:25 compute-0 podman[295656]: 2025-11-29 08:17:25.614848382 +0000 UTC m=+0.170182610 container remove 40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bouman, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:17:25 compute-0 systemd[1]: libpod-conmon-40ec43981b00e0fbb19a93ff4006f68a1dd9f67a03d5e0c60f13cd56d303dd69.scope: Deactivated successfully.
Nov 29 08:17:25 compute-0 podman[295696]: 2025-11-29 08:17:25.800078383 +0000 UTC m=+0.044693298 container create 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:17:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Nov 29 08:17:25 compute-0 ceph-mon[75237]: osdmap e472: 3 total, 3 up, 3 in
Nov 29 08:17:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1792897533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1792897533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Nov 29 08:17:25 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Nov 29 08:17:25 compute-0 systemd[1]: Started libpod-conmon-1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f.scope.
Nov 29 08:17:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59708ec97c3c96cecf8d318f093be4c152e11715d119e01a856d8ecc26433f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59708ec97c3c96cecf8d318f093be4c152e11715d119e01a856d8ecc26433f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59708ec97c3c96cecf8d318f093be4c152e11715d119e01a856d8ecc26433f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:25 compute-0 podman[295696]: 2025-11-29 08:17:25.78167233 +0000 UTC m=+0.026287275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59708ec97c3c96cecf8d318f093be4c152e11715d119e01a856d8ecc26433f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:25 compute-0 podman[295696]: 2025-11-29 08:17:25.89554389 +0000 UTC m=+0.140158835 container init 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:17:25 compute-0 podman[295696]: 2025-11-29 08:17:25.90152003 +0000 UTC m=+0.146134975 container start 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:17:25 compute-0 podman[295696]: 2025-11-29 08:17:25.906256567 +0000 UTC m=+0.150871512 container attach 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:17:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 260 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 9.1 MiB/s wr, 246 op/s
Nov 29 08:17:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Nov 29 08:17:26 compute-0 ceph-mon[75237]: osdmap e473: 3 total, 3 up, 3 in
Nov 29 08:17:26 compute-0 ceph-mon[75237]: pgmap v1957: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 260 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 9.1 MiB/s wr, 246 op/s
Nov 29 08:17:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Nov 29 08:17:26 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Nov 29 08:17:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2932696200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2932696200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:26 compute-0 zen_saha[295713]: {
Nov 29 08:17:26 compute-0 zen_saha[295713]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_id": 2,
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "type": "bluestore"
Nov 29 08:17:26 compute-0 zen_saha[295713]:     },
Nov 29 08:17:26 compute-0 zen_saha[295713]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_id": 0,
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "type": "bluestore"
Nov 29 08:17:26 compute-0 zen_saha[295713]:     },
Nov 29 08:17:26 compute-0 zen_saha[295713]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_id": 1,
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:17:26 compute-0 zen_saha[295713]:         "type": "bluestore"
Nov 29 08:17:26 compute-0 zen_saha[295713]:     }
Nov 29 08:17:26 compute-0 zen_saha[295713]: }
Nov 29 08:17:26 compute-0 systemd[1]: libpod-1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f.scope: Deactivated successfully.
Nov 29 08:17:26 compute-0 podman[295746]: 2025-11-29 08:17:26.923907745 +0000 UTC m=+0.037118086 container died 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c59708ec97c3c96cecf8d318f093be4c152e11715d119e01a856d8ecc26433f6-merged.mount: Deactivated successfully.
Nov 29 08:17:26 compute-0 podman[295746]: 2025-11-29 08:17:26.978288901 +0000 UTC m=+0.091499212 container remove 1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:17:26 compute-0 systemd[1]: libpod-conmon-1475e4a66a0be6e406139d11e31f2433a3d3a2499e0cfa96f817cfc07f2a7b9f.scope: Deactivated successfully.
Nov 29 08:17:27 compute-0 sudo[295589]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:17:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:17:27 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:27 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 8d06cd41-0429-43ce-bac3-adb294fd8126 does not exist
Nov 29 08:17:27 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d416d769-f8f3-4fbe-b372-7ba03d772e96 does not exist
Nov 29 08:17:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769838083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769838083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 sudo[295761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:27 compute-0 sudo[295761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:27 compute-0 sudo[295761]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:27.142 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:27.144 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:27.145 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:27 compute-0 sudo[295786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:17:27 compute-0 sudo[295786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:27 compute-0 sudo[295786]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:27 compute-0 ceph-mon[75237]: osdmap e474: 3 total, 3 up, 3 in
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2932696200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2932696200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1769838083' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1769838083' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.839877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247840013, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 723, "num_deletes": 256, "total_data_size": 725412, "memory_usage": 739704, "flush_reason": "Manual Compaction"}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247849200, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 715644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36644, "largest_seqno": 37366, "table_properties": {"data_size": 711767, "index_size": 1657, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9469, "raw_average_key_size": 20, "raw_value_size": 703765, "raw_average_value_size": 1533, "num_data_blocks": 71, "num_entries": 459, "num_filter_entries": 459, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404220, "oldest_key_time": 1764404220, "file_creation_time": 1764404247, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9412 microseconds, and 5375 cpu microseconds.
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.849295) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 715644 bytes OK
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.849321) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.851663) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.851678) EVENT_LOG_v1 {"time_micros": 1764404247851672, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.851696) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 721523, prev total WAL file size 721523, number of live WAL files 2.
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.852256) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(698KB)], [74(11MB)]
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247852379, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13113082, "oldest_snapshot_seqno": -1}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6963 keys, 11287972 bytes, temperature: kUnknown
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247941813, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11287972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11234920, "index_size": 34572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 177286, "raw_average_key_size": 25, "raw_value_size": 11103307, "raw_average_value_size": 1594, "num_data_blocks": 1376, "num_entries": 6963, "num_filter_entries": 6963, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404247, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.942271) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11287972 bytes
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.944458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 126.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(34.1) write-amplify(15.8) OK, records in: 7488, records dropped: 525 output_compression: NoCompression
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.944479) EVENT_LOG_v1 {"time_micros": 1764404247944470, "job": 42, "event": "compaction_finished", "compaction_time_micros": 89618, "compaction_time_cpu_micros": 36671, "output_level": 6, "num_output_files": 1, "total_output_size": 11287972, "num_input_records": 7488, "num_output_records": 6963, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247944766, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404247947591, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.852138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.947770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.947778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.947780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.947783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:27 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:17:27.947786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 11 MiB/s wr, 318 op/s
Nov 29 08:17:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4274171895' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4274171895' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:28 compute-0 ceph-mon[75237]: pgmap v1959: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 11 MiB/s wr, 318 op/s
Nov 29 08:17:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4274171895' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4274171895' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:29 compute-0 nova_compute[255040]: 2025-11-29 08:17:29.043 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Nov 29 08:17:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Nov 29 08:17:29 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Nov 29 08:17:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 3.5 MiB/s wr, 233 op/s
Nov 29 08:17:30 compute-0 nova_compute[255040]: 2025-11-29 08:17:30.109 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:30 compute-0 ceph-mon[75237]: osdmap e475: 3 total, 3 up, 3 in
Nov 29 08:17:30 compute-0 ceph-mon[75237]: pgmap v1961: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 3.5 MiB/s wr, 233 op/s
Nov 29 08:17:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Nov 29 08:17:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Nov 29 08:17:31 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Nov 29 08:17:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 1.2 MiB/s wr, 224 op/s
Nov 29 08:17:32 compute-0 ceph-mon[75237]: osdmap e476: 3 total, 3 up, 3 in
Nov 29 08:17:32 compute-0 ceph-mon[75237]: pgmap v1963: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 1.2 MiB/s wr, 224 op/s
Nov 29 08:17:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3101951740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3101951740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3101951740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3101951740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 4.2 KiB/s wr, 148 op/s
Nov 29 08:17:34 compute-0 nova_compute[255040]: 2025-11-29 08:17:34.046 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:34 compute-0 ceph-mon[75237]: pgmap v1964: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 4.2 KiB/s wr, 148 op/s
Nov 29 08:17:34 compute-0 nova_compute[255040]: 2025-11-29 08:17:34.990 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Nov 29 08:17:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Nov 29 08:17:35 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Nov 29 08:17:35 compute-0 nova_compute[255040]: 2025-11-29 08:17:35.158 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:35 compute-0 podman[295811]: 2025-11-29 08:17:35.955877381 +0000 UTC m=+0.121495445 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:17:35 compute-0 nova_compute[255040]: 2025-11-29 08:17:35.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 12 KiB/s wr, 116 op/s
Nov 29 08:17:36 compute-0 ceph-mon[75237]: osdmap e477: 3 total, 3 up, 3 in
Nov 29 08:17:36 compute-0 nova_compute[255040]: 2025-11-29 08:17:36.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:37 compute-0 ceph-mon[75237]: pgmap v1966: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 12 KiB/s wr, 116 op/s
Nov 29 08:17:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1161915434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 9.9 KiB/s wr, 96 op/s
Nov 29 08:17:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Nov 29 08:17:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Nov 29 08:17:38 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1161915434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:38 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:17:38
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Nov 29 08:17:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:17:38 compute-0 nova_compute[255040]: 2025-11-29 08:17:38.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:38 compute-0 nova_compute[255040]: 2025-11-29 08:17:38.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:17:38 compute-0 nova_compute[255040]: 2025-11-29 08:17:38.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:17:39 compute-0 nova_compute[255040]: 2025-11-29 08:17:39.048 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Nov 29 08:17:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Nov 29 08:17:39 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Nov 29 08:17:39 compute-0 ceph-mon[75237]: pgmap v1967: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 9.9 KiB/s wr, 96 op/s
Nov 29 08:17:39 compute-0 ceph-mon[75237]: osdmap e478: 3 total, 3 up, 3 in
Nov 29 08:17:39 compute-0 nova_compute[255040]: 2025-11-29 08:17:39.407 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:39 compute-0 nova_compute[255040]: 2025-11-29 08:17:39.407 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquired lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:39 compute-0 nova_compute[255040]: 2025-11-29 08:17:39.407 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:17:39 compute-0 nova_compute[255040]: 2025-11-29 08:17:39.408 255071 DEBUG nova.objects.instance [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5719d39a-a6d3-4e05-868e-db103119cdb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 11 KiB/s wr, 70 op/s
Nov 29 08:17:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Nov 29 08:17:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Nov 29 08:17:40 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.160 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:40 compute-0 ceph-mon[75237]: osdmap e479: 3 total, 3 up, 3 in
Nov 29 08:17:40 compute-0 ceph-mon[75237]: osdmap e480: 3 total, 3 up, 3 in
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.269 255071 DEBUG nova.network.neutron [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating instance_info_cache with network_info: [{"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.286 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Releasing lock "refresh_cache-5719d39a-a6d3-4e05-868e-db103119cdb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.287 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.288 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.288 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.289 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.289 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.327 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.328 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.329 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.329 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.330 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:40 compute-0 ovn_controller[153295]: 2025-11-29T08:17:40Z|00238|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 08:17:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440999041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.845 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.949 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:17:40 compute-0 nova_compute[255040]: 2025-11-29 08:17:40.950 255071 DEBUG nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.104 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.105 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4114MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.105 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.106 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.170 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 5719d39a-a6d3-4e05-868e-db103119cdb6 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.170 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.171 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:17:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Nov 29 08:17:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Nov 29 08:17:41 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Nov 29 08:17:41 compute-0 ceph-mon[75237]: pgmap v1970: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 11 KiB/s wr, 70 op/s
Nov 29 08:17:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3440999041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.213 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665097549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665097549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943745462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.617 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.627 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.643 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.664 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:17:41 compute-0 nova_compute[255040]: 2025-11-29 08:17:41.664 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 11 KiB/s wr, 127 op/s
Nov 29 08:17:42 compute-0 ceph-mon[75237]: osdmap e481: 3 total, 3 up, 3 in
Nov 29 08:17:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/665097549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/665097549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/943745462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Nov 29 08:17:43 compute-0 ceph-mon[75237]: pgmap v1973: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 11 KiB/s wr, 127 op/s
Nov 29 08:17:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Nov 29 08:17:43 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.352 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.353 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.370 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.370 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.371 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.371 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.372 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.374 255071 INFO nova.compute.manager [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Terminating instance
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.376 255071 DEBUG nova.compute.manager [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:17:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:17:43 compute-0 kernel: tapacd40164-54 (unregistering): left promiscuous mode
Nov 29 08:17:43 compute-0 NetworkManager[49116]: <info>  [1764404263.4369] device (tapacd40164-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:17:43 compute-0 ovn_controller[153295]: 2025-11-29T08:17:43Z|00239|binding|INFO|Releasing lport acd40164-54b0-4ce3-aa1f-8a4e056c6881 from this chassis (sb_readonly=0)
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.446 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 ovn_controller[153295]: 2025-11-29T08:17:43Z|00240|binding|INFO|Setting lport acd40164-54b0-4ce3-aa1f-8a4e056c6881 down in Southbound
Nov 29 08:17:43 compute-0 ovn_controller[153295]: 2025-11-29T08:17:43Z|00241|binding|INFO|Removing iface tapacd40164-54 ovn-installed in OVS
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.450 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.472 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.479 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:f9:a3 10.100.0.3'], port_security=['fa:16:3e:9a:f9:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5719d39a-a6d3-4e05-868e-db103119cdb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=acd40164-54b0-4ce3-aa1f-8a4e056c6881) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.480 163500 INFO neutron.agent.ovn.metadata.agent [-] Port acd40164-54b0-4ce3-aa1f-8a4e056c6881 in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a unbound from our chassis
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.481 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7844e875-d723-468d-8c4a-c3bb5b3b635a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.483 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[303b0b63-e461-45eb-ac44-32b75d77458e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.484 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace which is not needed anymore
Nov 29 08:17:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 29 08:17:43 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 16.923s CPU time.
Nov 29 08:17:43 compute-0 systemd-machined[216271]: Machine qemu-25-instance-00000019 terminated.
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [NOTICE]   (294872) : haproxy version is 2.8.14-c23fe91
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [NOTICE]   (294872) : path to executable is /usr/sbin/haproxy
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [WARNING]  (294872) : Exiting Master process...
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [WARNING]  (294872) : Exiting Master process...
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.613 255071 INFO nova.virt.libvirt.driver [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Instance destroyed successfully.
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [ALERT]    (294872) : Current worker (294874) exited with code 143 (Terminated)
Nov 29 08:17:43 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[294868]: [WARNING]  (294872) : All workers exited. Exiting... (0)
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.613 255071 DEBUG nova.objects.instance [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'resources' on Instance uuid 5719d39a-a6d3-4e05-868e-db103119cdb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:17:43 compute-0 systemd[1]: libpod-6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792.scope: Deactivated successfully.
Nov 29 08:17:43 compute-0 podman[295907]: 2025-11-29 08:17:43.621247225 +0000 UTC m=+0.049517327 container died 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792-userdata-shm.mount: Deactivated successfully.
Nov 29 08:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b44b48f04e003ef9d611a6a0640ad0bc369c8572adf911f7e4ad76d944ef1d66-merged.mount: Deactivated successfully.
Nov 29 08:17:43 compute-0 podman[295907]: 2025-11-29 08:17:43.662252213 +0000 UTC m=+0.090522315 container cleanup 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:17:43 compute-0 systemd[1]: libpod-conmon-6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792.scope: Deactivated successfully.
Nov 29 08:17:43 compute-0 podman[295949]: 2025-11-29 08:17:43.721649904 +0000 UTC m=+0.039303143 container remove 6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.726 255071 DEBUG nova.virt.libvirt.vif [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-846120083',display_name='tempest-TestEncryptedCinderVolumes-server-846120083',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-846120083',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-detp00aq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:07Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=5719d39a-a6d3-4e05-868e-db103119cdb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.727 255071 DEBUG nova.network.os_vif_util [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "address": "fa:16:3e:9a:f9:a3", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd40164-54", "ovs_interfaceid": "acd40164-54b0-4ce3-aa1f-8a4e056c6881", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.727 255071 DEBUG nova.network.os_vif_util [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.728 255071 DEBUG os_vif [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.728 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[11c43e56-e336-4086-8409-cf20f49b4c93]: (4, ('Sat Nov 29 08:17:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792)\n6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792\nSat Nov 29 08:17:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792)\n6b5cdcfe12a5ce57e02c8b037a68b519f66094c549ad1fe2a8b0da3d9e5e6792\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.731 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.731 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd40164-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.731 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5c658148-3e19-4971-8c1d-1c9b517934f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.732 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.732 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 kernel: tap7844e875-d0: left promiscuous mode
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.735 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.737 255071 INFO os_vif [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:f9:a3,bridge_name='br-int',has_traffic_filtering=True,id=acd40164-54b0-4ce3-aa1f-8a4e056c6881,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd40164-54')
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.754 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2c52c92a-4999-41af-bcd6-31a214c50dc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.760 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.767 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3dab8cf6-0e7d-416b-9c92-9d7df50db004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.768 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[105064fe-5c05-4c94-aae7-db53910f6acc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.786 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9aae6a-2fc7-45ac-8056-a2eb344113ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646230, 'reachable_time': 22849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295978, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.790 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:17:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:43.791 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[ee54c8c7-51d9-4789-9ce7-0bcf00c9cea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7844e875\x2dd723\x2d468d\x2d8c4a\x2dc3bb5b3b635a.mount: Deactivated successfully.
Nov 29 08:17:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4145270531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4145270531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.869 255071 DEBUG nova.compute.manager [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-unplugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.870 255071 DEBUG oslo_concurrency.lockutils [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.870 255071 DEBUG oslo_concurrency.lockutils [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.870 255071 DEBUG oslo_concurrency.lockutils [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.871 255071 DEBUG nova.compute.manager [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] No waiting events found dispatching network-vif-unplugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.872 255071 DEBUG nova.compute.manager [req-121e16d1-9963-4e79-a506-34f42ac84b81 req-816840d2-9373-408e-829a-f53c54166b0e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-unplugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.933 255071 INFO nova.virt.libvirt.driver [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Deleting instance files /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6_del
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.934 255071 INFO nova.virt.libvirt.driver [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Deletion of /var/lib/nova/instances/5719d39a-a6d3-4e05-868e-db103119cdb6_del complete
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.980 255071 INFO nova.compute.manager [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.981 255071 DEBUG oslo.service.loopingcall [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.981 255071 DEBUG nova.compute.manager [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:17:43 compute-0 nova_compute[255040]: 2025-11-29 08:17:43.981 255071 DEBUG nova.network.neutron [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:17:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 9.0 KiB/s wr, 104 op/s
Nov 29 08:17:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Nov 29 08:17:44 compute-0 ceph-mon[75237]: osdmap e482: 3 total, 3 up, 3 in
Nov 29 08:17:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4145270531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/4145270531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Nov 29 08:17:44 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Nov 29 08:17:44 compute-0 nova_compute[255040]: 2025-11-29 08:17:44.613 255071 DEBUG nova.network.neutron [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:17:44 compute-0 nova_compute[255040]: 2025-11-29 08:17:44.639 255071 INFO nova.compute.manager [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Took 0.66 seconds to deallocate network for instance.
Nov 29 08:17:44 compute-0 nova_compute[255040]: 2025-11-29 08:17:44.803 255071 DEBUG nova.compute.manager [req-a4b953b8-2381-4e2b-ac33-aaccc887ece6 req-fd323bf2-746c-4a63-85d1-525da58358da cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-deleted-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:44 compute-0 nova_compute[255040]: 2025-11-29 08:17:44.967 255071 INFO nova.compute.manager [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Took 0.33 seconds to detach 1 volumes for instance.
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.011 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.011 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.065 255071 DEBUG oslo_concurrency.processutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.163 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:45 compute-0 ceph-mon[75237]: pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 9.0 KiB/s wr, 104 op/s
Nov 29 08:17:45 compute-0 ceph-mon[75237]: osdmap e483: 3 total, 3 up, 3 in
Nov 29 08:17:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287525889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.529 255071 DEBUG oslo_concurrency.processutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.535 255071 DEBUG nova.compute.provider_tree [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.549 255071 DEBUG nova.scheduler.client.report [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.575 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.601 255071 INFO nova.scheduler.client.report [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Deleted allocations for instance 5719d39a-a6d3-4e05-868e-db103119cdb6
Nov 29 08:17:45 compute-0 nova_compute[255040]: 2025-11-29 08:17:45.673 255071 DEBUG oslo_concurrency.lockutils [None req-ac8c935a-767d-4bd8-91cf-e9b29a9ad940 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 19 KiB/s wr, 149 op/s
Nov 29 08:17:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Nov 29 08:17:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Nov 29 08:17:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4287525889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:46 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Nov 29 08:17:46 compute-0 podman[296005]: 2025-11-29 08:17:46.927672827 +0000 UTC m=+0.083412375 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.008 255071 DEBUG nova.compute.manager [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.009 255071 DEBUG oslo_concurrency.lockutils [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.009 255071 DEBUG oslo_concurrency.lockutils [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.010 255071 DEBUG oslo_concurrency.lockutils [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "5719d39a-a6d3-4e05-868e-db103119cdb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.010 255071 DEBUG nova.compute.manager [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] No waiting events found dispatching network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:17:47 compute-0 nova_compute[255040]: 2025-11-29 08:17:47.010 255071 WARNING nova.compute.manager [req-21a51608-a6b1-4352-8652-bbdf7d62dd89 req-b6214f60-64dd-4451-a544-79cae0d4bbab cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Received unexpected event network-vif-plugged-acd40164-54b0-4ce3-aa1f-8a4e056c6881 for instance with vm_state deleted and task_state None.
Nov 29 08:17:47 compute-0 ceph-mon[75237]: pgmap v1977: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 19 KiB/s wr, 149 op/s
Nov 29 08:17:47 compute-0 ceph-mon[75237]: osdmap e484: 3 total, 3 up, 3 in
Nov 29 08:17:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 30 KiB/s wr, 140 op/s
Nov 29 08:17:48 compute-0 nova_compute[255040]: 2025-11-29 08:17:48.733 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Nov 29 08:17:49 compute-0 ceph-mon[75237]: pgmap v1979: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 30 KiB/s wr, 140 op/s
Nov 29 08:17:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Nov 29 08:17:49 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Nov 29 08:17:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760795250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2760795250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069653010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 32 KiB/s wr, 174 op/s
Nov 29 08:17:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Nov 29 08:17:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Nov 29 08:17:50 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Nov 29 08:17:50 compute-0 nova_compute[255040]: 2025-11-29 08:17:50.165 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:50 compute-0 sshd-session[296024]: Invalid user appuser from 45.78.219.195 port 47020
Nov 29 08:17:50 compute-0 sshd-session[296024]: Received disconnect from 45.78.219.195 port 47020:11: Bye Bye [preauth]
Nov 29 08:17:50 compute-0 sshd-session[296024]: Disconnected from invalid user appuser 45.78.219.195 port 47020 [preauth]
Nov 29 08:17:50 compute-0 nova_compute[255040]: 2025-11-29 08:17:50.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:17:51 compute-0 ceph-mon[75237]: osdmap e485: 3 total, 3 up, 3 in
Nov 29 08:17:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2760795250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2760795250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:51 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1069653010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:51 compute-0 ceph-mon[75237]: pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 32 KiB/s wr, 174 op/s
Nov 29 08:17:51 compute-0 ceph-mon[75237]: osdmap e486: 3 total, 3 up, 3 in
Nov 29 08:17:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 17 KiB/s wr, 111 op/s
Nov 29 08:17:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Nov 29 08:17:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Nov 29 08:17:52 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Nov 29 08:17:52 compute-0 podman[296027]: 2025-11-29 08:17:52.917138023 +0000 UTC m=+0.079063469 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:17:53 compute-0 ceph-mon[75237]: pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 17 KiB/s wr, 111 op/s
Nov 29 08:17:53 compute-0 ceph-mon[75237]: osdmap e487: 3 total, 3 up, 3 in
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.564 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.565 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.584 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.656 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.657 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.665 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.665 255071 INFO nova.compute.claims [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.734 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:53 compute-0 nova_compute[255040]: 2025-11-29 08:17:53.755 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.3 KiB/s wr, 67 op/s
Nov 29 08:17:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:17:54 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773592305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:54 compute-0 nova_compute[255040]: 2025-11-29 08:17:54.184 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:54 compute-0 nova_compute[255040]: 2025-11-29 08:17:54.190 255071 DEBUG nova.compute.provider_tree [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:17:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Nov 29 08:17:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1773592305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:17:54 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Nov 29 08:17:54 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.085 255071 DEBUG nova.scheduler.client.report [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:17:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.167 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:55 compute-0 ceph-mon[75237]: pgmap v1985: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.3 KiB/s wr, 67 op/s
Nov 29 08:17:55 compute-0 ceph-mon[75237]: osdmap e488: 3 total, 3 up, 3 in
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.572 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.573 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.650 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.650 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.675 255071 INFO nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.692 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.750 255071 INFO nova.virt.block_device [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Booting with volume a8f13395-7310-4b6a-abf7-0d8c9669876a at /dev/vda
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.954 255071 DEBUG os_brick.utils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.956 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.971 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.972 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[91eb079d-d207-4a77-a34e-4878b1527609]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.974 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.990 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.991 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[5b11d3db-9d68-4efb-a059-d7076b979fe0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:55 compute-0 nova_compute[255040]: 2025-11-29 08:17:55.993 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.010 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.011 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[1af386a0-2544-49f0-924c-a4b87556fab0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.013 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[87cdd8d3-6003-4e55-b250-af535b11e746]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.014 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 78 op/s
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.044 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.048 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.049 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.050 255071 DEBUG os_brick.initiator.connectors.lightos [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.051 255071 DEBUG os_brick.utils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.052 255071 DEBUG nova.virt.block_device [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updating existing volume attachment record: 1279d40c-36b5-4ef4-8524-034efb54e624 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:17:56 compute-0 nova_compute[255040]: 2025-11-29 08:17:56.081 255071 DEBUG nova.policy [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7b756f6c364e97a9d0d5298587d61c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6a2673206a04ec28205d820751e3174', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894712610698704 of space, bias 1.0, pg target 0.8684137832096113 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:17:56 compute-0 ceph-mon[75237]: pgmap v1987: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 78 op/s
Nov 29 08:17:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3852645168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3852645168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:17:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3744935868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3852645168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3852645168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3744935868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.854 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Successfully created port: 2ff8b035-5dbe-484d-8c3b-b6f45649371f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.917 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.919 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.919 255071 INFO nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Creating image(s)
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.920 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.920 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Ensure instance console log exists: /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.920 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.920 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:17:57 compute-0 nova_compute[255040]: 2025-11-29 08:17:57.921 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:17:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 29 08:17:58 compute-0 ceph-mon[75237]: pgmap v1988: 305 pgs: 2 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 300 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 29 08:17:58 compute-0 nova_compute[255040]: 2025-11-29 08:17:58.611 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404263.6094863, 5719d39a-a6d3-4e05-868e-db103119cdb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:17:58 compute-0 nova_compute[255040]: 2025-11-29 08:17:58.612 255071 INFO nova.compute.manager [-] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] VM Stopped (Lifecycle Event)
Nov 29 08:17:58 compute-0 nova_compute[255040]: 2025-11-29 08:17:58.735 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:17:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854086447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:59 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:17:59 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854086447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3854086447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:17:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3854086447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.640 255071 DEBUG nova.compute.manager [None req-dc24aace-68e2-4781-b12f-90da523e691e - - - - - -] [instance: 5719d39a-a6d3-4e05-868e-db103119cdb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:17:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:59.809 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.809 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:17:59.810 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.974 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Successfully updated port: 2ff8b035-5dbe-484d-8c3b-b6f45649371f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.992 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.992 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquired lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:17:59 compute-0 nova_compute[255040]: 2025-11-29 08:17:59.992 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
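[editor's note] The "Acquiring"/"Acquired" lines above are oslo.concurrency's named lock around the per-instance network info cache. A minimal sketch of that locking pattern, assuming the lock name format shown in the log:

    from oslo_concurrency import lockutils

    instance_uuid = 'df14e24b-3b49-44ee-865e-eda9837a9190'

    # lockutils.lock() is a context manager; the Acquiring / Acquired /
    # Releasing debug lines correspond to entering and leaving this block.
    with lockutils.lock('refresh_cache-' + instance_uuid):
        # rebuild the instance network info cache while holding the lock
        pass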
Nov 29 08:18:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.9 KiB/s wr, 53 op/s
Nov 29 08:18:00 compute-0 nova_compute[255040]: 2025-11-29 08:18:00.100 255071 DEBUG nova.compute.manager [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-changed-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:00 compute-0 nova_compute[255040]: 2025-11-29 08:18:00.100 255071 DEBUG nova.compute.manager [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Refreshing instance network info cache due to event network-changed-2ff8b035-5dbe-484d-8c3b-b6f45649371f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:18:00 compute-0 nova_compute[255040]: 2025-11-29 08:18:00.101 255071 DEBUG oslo_concurrency.lockutils [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Nov 29 08:18:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Nov 29 08:18:00 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Nov 29 08:18:00 compute-0 nova_compute[255040]: 2025-11-29 08:18:00.149 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:18:00 compute-0 nova_compute[255040]: 2025-11-29 08:18:00.169 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.302 255071 DEBUG nova.network.neutron [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updating instance_info_cache with network_info: [{"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.336 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Releasing lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.337 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Instance network_info: |[{"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.340 255071 DEBUG oslo_concurrency.lockutils [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.340 255071 DEBUG nova.network.neutron [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Refreshing network info cache for port 2ff8b035-5dbe-484d-8c3b-b6f45649371f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
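[editor's note] "Refreshing network info cache for port" boils down to re-fetching the port from Neutron and rebuilding network_info from it. A rough illustration with openstacksdk rather than Nova's internal Neutron client; the cloud name is a placeholder:

    import openstack

    # Hypothetical clouds.yaml entry; only the port UUID comes from the log.
    conn = openstack.connect(cloud='overcloud')
    port = conn.network.get_port('2ff8b035-5dbe-484d-8c3b-b6f45649371f')
    print(port.mac_address, [ip['ip_address'] for ip in port.fixed_ips])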
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.347 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Start _get_guest_xml network_info=[{"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a8f13395-7310-4b6a-abf7-0d8c9669876a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a8f13395-7310-4b6a-abf7-0d8c9669876a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'attached_at': '', 'detached_at': '', 'volume_id': 'a8f13395-7310-4b6a-abf7-0d8c9669876a', 'serial': 'a8f13395-7310-4b6a-abf7-0d8c9669876a'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '1279d40c-36b5-4ef4-8524-034efb54e624', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.355 255071 WARNING nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.360 255071 DEBUG nova.virt.libvirt.host [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.361 255071 DEBUG nova.virt.libvirt.host [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.371 255071 DEBUG nova.virt.libvirt.host [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.372 255071 DEBUG nova.virt.libvirt.host [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.374 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.374 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.376 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.376 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.377 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.378 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.378 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.378 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.379 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.379 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.379 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.380 255071 DEBUG nova.virt.hardware [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
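[editor's note] The topology lines above enumerate valid sockets:cores:threads splits for 1 vCPU under the 65536/65536/65536 limits and end up with the single (1,1,1) topology. A minimal sketch of such an enumeration (not the nova.virt.hardware implementation, just the idea):

    # Enumerate (sockets, cores, threads) factorizations of a vCPU count
    # that stay within the per-dimension limits reported in the log.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches "Got 1 possible topologies"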
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.413 255071 DEBUG nova.storage.rbd_utils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image df14e24b-3b49-44ee-865e-eda9837a9190_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:01 compute-0 nova_compute[255040]: 2025-11-29 08:18:01.418 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:01 compute-0 ceph-mon[75237]: pgmap v1989: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.9 KiB/s wr, 53 op/s
Nov 29 08:18:01 compute-0 ceph-mon[75237]: osdmap e489: 3 total, 3 up, 3 in
Nov 29 08:18:02 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:18:02 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240161100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.030 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
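[editor's note] The "Running cmd (subprocess)" / "returned: 0" pair above is oslo.concurrency's processutils wrapper around the ceph CLI. A minimal sketch of the same call, reusing exactly the command line from the log:

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a non-zero exit code.
    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')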
Nov 29 08:18:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 54 op/s
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.418 255071 DEBUG os_brick.encryptors [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Using volume encryption metadata '{'encryption_key_id': '7ced2a78-8f31-4db8-a4d7-d34b0ae043a1', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a8f13395-7310-4b6a-abf7-0d8c9669876a', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a8f13395-7310-4b6a-abf7-0d8c9669876a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'attached_at': '', 'detached_at': '', 'volume_id': 'a8f13395-7310-4b6a-abf7-0d8c9669876a', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.421 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:18:02 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1240161100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:18:02 compute-0 ceph-mon[75237]: pgmap v1991: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 54 op/s
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.444 255071 DEBUG barbicanclient.v1.secrets [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.445 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
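[editor's note] The barbicanclient lines above are the LUKS passphrase being fetched by secret href. A minimal sketch with python-barbicanclient; the Keystone credentials below are placeholders (only the Barbican endpoint and secret UUID come from the log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client as barbican_client

    # Hypothetical service credentials -- adjust to the deployment.
    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='nova', password='***', project_name='service',
                       user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    barbican = barbican_client.Client(session=sess)

    href = ('https://barbican-internal.openstack.svc:9311/'
            'secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1')
    secret = barbican.secrets.get(href)   # lazy reference
    passphrase = secret.payload           # triggers the GETs seen above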
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.560 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.561 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.589 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.589 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.616 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.617 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.643 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.644 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.668 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.669 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.695 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.696 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.718 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.718 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.749 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.749 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.774 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.775 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.816 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.816 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.840 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.841 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.865 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.865 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.889 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.890 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.913 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.914 255071 INFO barbicanclient.base [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Calculated Secrets uuid ref: secrets/7ced2a78-8f31-4db8-a4d7-d34b0ae043a1
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.941 255071 DEBUG barbicanclient.client [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.942 255071 DEBUG nova.virt.libvirt.host [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <volume>a8f13395-7310-4b6a-abf7-0d8c9669876a</volume>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:18:02 compute-0 nova_compute[255040]: </secret>
Nov 29 08:18:02 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
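[editor's note] The Secret XML above is registered with libvirt and then loaded with the passphrase fetched from Barbican. A minimal sketch with the libvirt Python bindings; the passphrase bytes are a placeholder:

    import libvirt

    secret_xml = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>a8f13395-7310-4b6a-abf7-0d8c9669876a</volume>
      </usage>
    </secret>"""

    conn = libvirt.open('qemu:///system')
    secret = conn.secretDefineXML(secret_xml)
    # Placeholder for the key material retrieved from Barbican.
    secret.setValue(b'passphrase')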
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.973 255071 DEBUG nova.virt.libvirt.vif [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-736452909',display_name='tempest-TestEncryptedCinderVolumes-server-736452909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-736452909',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-vg8f425i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:55Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=df14e24b-3b49-44ee-865e-eda9837a9190,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.974 255071 DEBUG nova.network.os_vif_util [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.975 255071 DEBUG nova.network.os_vif_util [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.978 255071 DEBUG nova.objects.instance [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'pci_devices' on Instance uuid df14e24b-3b49-44ee-865e-eda9837a9190 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.995 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <uuid>df14e24b-3b49-44ee-865e-eda9837a9190</uuid>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <name>instance-0000001a</name>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-736452909</nova:name>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:18:01</nova:creationTime>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:user uuid="8a7b756f6c364e97a9d0d5298587d61c">tempest-TestEncryptedCinderVolumes-2116890995-project-member</nova:user>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:project uuid="e6a2673206a04ec28205d820751e3174">tempest-TestEncryptedCinderVolumes-2116890995</nova:project>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <nova:port uuid="2ff8b035-5dbe-484d-8c3b-b6f45649371f">
Nov 29 08:18:02 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <system>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="serial">df14e24b-3b49-44ee-865e-eda9837a9190</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="uuid">df14e24b-3b49-44ee-865e-eda9837a9190</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </system>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <os>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </os>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <features>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </features>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/df14e24b-3b49-44ee-865e-eda9837a9190_disk.config">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </source>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-a8f13395-7310-4b6a-abf7-0d8c9669876a">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </source>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <serial>a8f13395-7310-4b6a-abf7-0d8c9669876a</serial>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:18:02 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="3ca6f51d-0c0b-43ce-a592-f7163975f1f8"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:e3:d2:ad"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <target dev="tap2ff8b035-5d"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/console.log" append="off"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <video>
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </video>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:18:02 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:18:02 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:18:02 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:18:02 compute-0 nova_compute[255040]: </domain>
Nov 29 08:18:02 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
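[editor's note] With the guest XML above in hand, the driver's next step is to register the domain with libvirt and power it on. A minimal sketch with the libvirt Python bindings, not necessarily the exact call sequence nova uses; guest_xml stands for the <domain> document printed above:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(guest_xml)   # guest_xml: the <domain> XML above
    dom.createWithFlags(0)            # powers the instance on
    print(dom.name(), dom.UUIDString())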
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.998 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Preparing to wait for external event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.998 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.998 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.998 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:02 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.999 255071 DEBUG nova.virt.libvirt.vif [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-736452909',display_name='tempest-TestEncryptedCinderVolumes-server-736452909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-736452909',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-vg8f425i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:55Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=df14e24b-3b49-44ee-865e-eda9837a9190,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:02.999 255071 DEBUG nova.network.os_vif_util [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.000 255071 DEBUG nova.network.os_vif_util [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.000 255071 DEBUG os_vif [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.001 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.001 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.002 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.004 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.004 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ff8b035-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.005 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ff8b035-5d, col_values=(('external_ids', {'iface-id': '2ff8b035-5dbe-484d-8c3b-b6f45649371f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:d2:ad', 'vm-uuid': 'df14e24b-3b49-44ee-865e-eda9837a9190'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
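The two ovsdbapp transactions above (AddPortCommand plus the DbSetCommand on the Interface row) amount to attaching the tap device to br-int and tagging it with the Neutron port identifiers. A rough command-line equivalent, shown here as a hedged Python sketch that shells out to ovs-vsctl rather than using Nova's native OVSDB IDL connection:

    import subprocess

    port = "tap2ff8b035-5d"
    iface_id = "2ff8b035-5dbe-484d-8c3b-b6f45649371f"
    mac = "fa:16:3e:e3:d2:ad"
    vm_uuid = "df14e24b-3b49-44ee-865e-eda9837a9190"

    # Add the port to br-int (--may-exist mirrors may_exist=True above) and
    # write the same external_ids that the DbSetCommand sets on the Interface.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f'external_ids:attached-mac="{mac}"',
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )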
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.006 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:03 compute-0 NetworkManager[49116]: <info>  [1764404283.0077] manager: (tap2ff8b035-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.008 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.012 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.012 255071 INFO os_vif [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d')
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.160 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.160 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.161 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] No VIF found with MAC fa:16:3e:e3:d2:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.161 255071 INFO nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Using config drive
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.187 255071 DEBUG nova.storage.rbd_utils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image df14e24b-3b49-44ee-865e-eda9837a9190_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
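The rbd_utils check above looks for an existing config-drive image in the vms pool before one is written. A minimal stand-alone sketch of the same existence check, assuming the python3-rados/python3-rbd bindings and the client ID and ceph.conf path that appear in the later rbd import command:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening the image raises ImageNotFound when it is absent, which is
        # what produces the DEBUG line above.
        with rbd.Image(ioctx, "df14e24b-3b49-44ee-865e-eda9837a9190_disk.config",
                       read_only=True) as image:
            print("exists, size:", image.size())
    except rbd.ImageNotFound:
        print("rbd image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()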
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.538 255071 DEBUG nova.network.neutron [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updated VIF entry in instance network info cache for port 2ff8b035-5dbe-484d-8c3b-b6f45649371f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.540 255071 DEBUG nova.network.neutron [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updating instance_info_cache with network_info: [{"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.562 255071 DEBUG oslo_concurrency.lockutils [req-f57b7e07-d055-43f2-838a-537f899086d7 req-4411e9f6-5ad8-4d06-b701-3127899769d9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
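The info cache update above still records port 2ff8b035-5dbe-484d-8c3b-b6f45649371f with "active": false; it turns active once OVN binds it a second later. A hedged openstacksdk sketch for inspecting the same port from an API client (the cloud name is a placeholder, not taken from the log):

    import openstack

    # "mycloud" is a hypothetical clouds.yaml entry.
    conn = openstack.connect(cloud="mycloud")
    port = conn.network.get_port("2ff8b035-5dbe-484d-8c3b-b6f45649371f")
    # status flips to ACTIVE once ovn-controller claims the port below.
    print(port.status, port.binding_host_id, port.fixed_ips)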
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.649 255071 INFO nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Creating config drive at /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.655 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvoatw41f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.792 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvoatw41f" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
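The config drive is built by running mkisofs over a temporary directory of metadata files; the full argument list is visible in the CMD line above (oslo logs it without shell quoting, so the multi-word -publisher value looks split). A hedged reconstruction as a Python call:

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpvoatw41f"],  # temporary metadata dir from the log; removed after the build
        check=True,
    )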
Nov 29 08:18:03 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:03.811 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.814 255071 DEBUG nova.storage.rbd_utils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] rbd image df14e24b-3b49-44ee-865e-eda9837a9190_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:18:03 compute-0 nova_compute[255040]: 2025-11-29 08:18:03.816 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config df14e24b-3b49-44ee-865e-eda9837a9190_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.240 255071 DEBUG oslo_concurrency.processutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config df14e24b-3b49-44ee-865e-eda9837a9190_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.241 255071 INFO nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Deleting local config drive /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190/disk.config because it was imported into RBD.
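After the rbd import returns 0 the local ISO is deleted, so the config drive now exists only in Ceph. It can be checked with the rbd CLI; a hedged sketch using the same pool, client ID and conf file as the import command above:

    import subprocess

    subprocess.run(
        ["rbd", "info", "--pool", "vms",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "df14e24b-3b49-44ee-865e-eda9837a9190_disk.config"],
        check=True,
    )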
Nov 29 08:18:04 compute-0 kernel: tap2ff8b035-5d: entered promiscuous mode
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.2845] manager: (tap2ff8b035-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 08:18:04 compute-0 ovn_controller[153295]: 2025-11-29T08:18:04Z|00242|binding|INFO|Claiming lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f for this chassis.
Nov 29 08:18:04 compute-0 ovn_controller[153295]: 2025-11-29T08:18:04Z|00243|binding|INFO|2ff8b035-5dbe-484d-8c3b-b6f45649371f: Claiming fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.287 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.293 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:d2:ad 10.100.0.12'], port_security=['fa:16:3e:e3:d2:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2ff8b035-5dbe-484d-8c3b-b6f45649371f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.294 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff8b035-5dbe-484d-8c3b-b6f45649371f in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a bound to our chassis
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.295 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.313 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c006ad-95c0-4793-a26b-e6bb8ed9582c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.314 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7844e875-d1 in ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.316 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7844e875-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.316 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6afc4c7e-179c-4584-bf98-a86dee7b7fbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
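Provisioning metadata for the network means creating the ovnmeta- namespace with a veth pair: the -d0 end stays in the root namespace and is plugged into br-int below, while the -d1 end moves into the namespace where haproxy will bind 169.254.169.254. The agent does this through privsep/pyroute2; a rough by-hand equivalent with the ip utility, shown as a hedged Python sketch (addresses on the inner end are illustrative, the agent derives the exact ones from the metadata port):

    import subprocess

    ns = "ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("ip", "netns", "add", ns)
    # veth pair: tap...-d0 stays outside, tap...-d1 goes into the namespace.
    run("ip", "link", "add", "tap7844e875-d0", "type", "veth",
        "peer", "name", "tap7844e875-d1")
    run("ip", "link", "set", "tap7844e875-d1", "netns", ns)
    run("ip", "link", "set", "tap7844e875-d0", "up")
    run("ip", "netns", "exec", ns, "ip", "link", "set", "tap7844e875-d1", "up")
    # Illustrative only: the metadata address served by haproxy in this namespace.
    run("ip", "netns", "exec", ns, "ip", "addr", "add",
        "169.254.169.254/32", "dev", "tap7844e875-d1")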
Nov 29 08:18:04 compute-0 systemd-udevd[296192]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.317 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f08d95b7-6d8a-4f67-ad20-73c7821aa402]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_controller[153295]: 2025-11-29T08:18:04Z|00244|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f ovn-installed in OVS
Nov 29 08:18:04 compute-0 ovn_controller[153295]: 2025-11-29T08:18:04Z|00245|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f up in Southbound
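ovn-controller claiming the lport and setting it up in the Southbound DB is what ultimately makes Neutron mark the port ACTIVE and emit the network-vif-plugged event seen later. A hedged check of that binding from the chassis, assuming ovn-sbctl is available and configured to reach the Southbound DB in this deployment:

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding",
         "logical_port=2ff8b035-5dbe-484d-8c3b-b6f45649371f"],
        check=True, capture_output=True, text=True,
    )
    # Expect compute-0's chassis UUID and "true" once the two
    # "Claiming/Setting lport ... up" messages above have been processed.
    print(out.stdout)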
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.330 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[4f669a39-8e40-42ab-9fd2-fe24ba88a88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 systemd-machined[216271]: New machine qemu-26-instance-0000001a.
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.3331] device (tap2ff8b035-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.3340] device (tap2ff8b035-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.335 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.336 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.345 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[444fa0c9-7827-41c5-bd39-6e22578799b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.376 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[98dc699a-7633-48c3-9d6d-cb8c6166f3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.384 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cdda9f-fc1e-4c6b-92dc-d0a0d42b8cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.3848] manager: (tap7844e875-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.413 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3e74e0-d4e0-453f-8a57-f5ba7caa9c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.416 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8e85c6-2c5a-4ee4-bffc-2dd49d6ab592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.4396] device (tap7844e875-d0): carrier: link connected
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.444 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[34a0da81-0466-4be9-bac3-7d78fa2d0e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.460 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[67f83741-5e82-44e4-b73c-bf33a6b6730c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652177, 'reachable_time': 33114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296225, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.475 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[acbb0b02-239a-4cd1-ba54-62471b750942]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7298'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652177, 'tstamp': 652177}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296226, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.492 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[880611a8-9af1-4d21-bff5-5e1ca317b625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652177, 'reachable_time': 33114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296227, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.524 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb595a9-55ab-49d8-b608-c6293eae82a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.579 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fc43b3aa-aa1e-401a-ab7c-026ae90e9776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.581 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.581 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.581 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7844e875-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.583 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 kernel: tap7844e875-d0: entered promiscuous mode
Nov 29 08:18:04 compute-0 NetworkManager[49116]: <info>  [1764404284.5835] manager: (tap7844e875-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.585 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.585 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7844e875-d0, col_values=(('external_ids', {'iface-id': 'b495613a-3fb1-48c4-aa81-640b29e83d9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.586 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 ovn_controller[153295]: 2025-11-29T08:18:04Z|00246|binding|INFO|Releasing lport b495613a-3fb1-48c4-aa81-640b29e83d9b from this chassis (sb_readonly=0)
Nov 29 08:18:04 compute-0 nova_compute[255040]: 2025-11-29 08:18:04.609 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.610 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.611 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[060972b9-3eb3-4e1b-abe2-8c51a93a2425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.612 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:18:04 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:04.612 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'env', 'PROCESS_TAG=haproxy-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7844e875-d723-468d-8c4a-c3bb5b3b635a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
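The generated haproxy instance listens on 169.254.169.254:80 inside the ovnmeta- namespace and forwards requests to the Neutron metadata socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve which instance is asking. From inside the guest the usual request path applies; a hedged sketch of what cloud-init effectively does (assumes the requests package is available in the guest):

    import requests

    # Standard Nova metadata endpoint, reachable only from inside the instance.
    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    print(resp.json().get("uuid"))  # should be df14e24b-3b49-44ee-865e-eda9837a9190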
Nov 29 08:18:04 compute-0 podman[296295]: 2025-11-29 08:18:04.985778429 +0000 UTC m=+0.060833330 container create da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:18:05 compute-0 systemd[1]: Started libpod-conmon-da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6.scope.
Nov 29 08:18:05 compute-0 podman[296295]: 2025-11-29 08:18:04.953225788 +0000 UTC m=+0.028280729 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:18:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a53c3a1e9c31b06601e50eeb0b740eabaca40b52b1c2077830d74c599bfa09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:05 compute-0 podman[296295]: 2025-11-29 08:18:05.096216148 +0000 UTC m=+0.171271069 container init da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:18:05 compute-0 podman[296295]: 2025-11-29 08:18:05.102418973 +0000 UTC m=+0.177473864 container start da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 08:18:05 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [NOTICE]   (296314) : New worker (296316) forked
Nov 29 08:18:05 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [NOTICE]   (296314) : Loading success.
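The proxy itself runs as a podman container named after the network, as shown in the container create/start lines above. A hedged health check from the host:

    import subprocess

    name = "neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a"
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", name],
        check=True, capture_output=True, text=True,
    )
    print(out.stdout.strip())  # expected: running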
Nov 29 08:18:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e489 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Nov 29 08:18:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Nov 29 08:18:05 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Nov 29 08:18:05 compute-0 nova_compute[255040]: 2025-11-29 08:18:05.172 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:05 compute-0 ceph-mon[75237]: pgmap v1992: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Nov 29 08:18:05 compute-0 ceph-mon[75237]: osdmap e490: 3 total, 3 up, 3 in
Nov 29 08:18:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 511 B/s wr, 25 op/s
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.087 255071 DEBUG nova.compute.manager [req-3ce02ec9-4064-4fce-be97-c421e9519f8d req-49028ac1-e6c1-4e89-8c66-88ce7b3d8f6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.088 255071 DEBUG oslo_concurrency.lockutils [req-3ce02ec9-4064-4fce-be97-c421e9519f8d req-49028ac1-e6c1-4e89-8c66-88ce7b3d8f6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.088 255071 DEBUG oslo_concurrency.lockutils [req-3ce02ec9-4064-4fce-be97-c421e9519f8d req-49028ac1-e6c1-4e89-8c66-88ce7b3d8f6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.088 255071 DEBUG oslo_concurrency.lockutils [req-3ce02ec9-4064-4fce-be97-c421e9519f8d req-49028ac1-e6c1-4e89-8c66-88ce7b3d8f6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.089 255071 DEBUG nova.compute.manager [req-3ce02ec9-4064-4fce-be97-c421e9519f8d req-49028ac1-e6c1-4e89-8c66-88ce7b3d8f6a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Processing event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
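Nova's spawn path blocks on a network-vif-plugged event that Neutron sends once OVN reports the port up; the pop_instance_event/lock lines above are that handshake completing. The sketch below is not Nova's API, just a minimal stand-alone illustration of the register-then-wait pattern (the 300-second figure mirrors Nova's default vif_plugging_timeout):

    import threading

    pending = {}

    def expect(event_name):
        # Register interest before triggering the action that causes the event.
        ev = threading.Event()
        pending[event_name] = ev
        return ev

    def deliver(event_name):
        ev = pending.pop(event_name, None)
        if ev is None:
            # No waiter left: the same situation as the WARNING logged at
            # 08:18:08 when a second network-vif-plugged arrives after spawn.
            print("unexpected event", event_name)
        else:
            ev.set()

    waiter = expect("network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f")
    # ... guest is defined, OVN claims the port, Neutron notifies Nova ...
    deliver("network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f")
    assert waiter.wait(timeout=300)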
Nov 29 08:18:06 compute-0 ceph-mon[75237]: pgmap v1994: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 511 B/s wr, 25 op/s
Nov 29 08:18:06 compute-0 podman[296325]: 2025-11-29 08:18:06.936025917 +0000 UTC m=+0.100756890 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.994 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.995 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404286.9953792, df14e24b-3b49-44ee-865e-eda9837a9190 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.995 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] VM Started (Lifecycle Event)
Nov 29 08:18:06 compute-0 nova_compute[255040]: 2025-11-29 08:18:06.999 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:18:07 compute-0 nova_compute[255040]: 2025-11-29 08:18:07.003 255071 INFO nova.virt.libvirt.driver [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Instance spawned successfully.
Nov 29 08:18:07 compute-0 nova_compute[255040]: 2025-11-29 08:18:07.003 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.007 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.025 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.031 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
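The sync message above compares the database value 0 with the hypervisor's 1; a short reference, assuming the standard nova.compute.power_state constants:

    # Nova power_state constants (nova/compute/power_state.py).
    names = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
             4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    # DB power_state 0 vs VM power_state 1 in the sync message above:
    print(names[0], "->", names[1])  # NOSTATE -> RUNNING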
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.035 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.035 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.036 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.036 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.036 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.037 255071 DEBUG nova.virt.libvirt.driver [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 19 KiB/s wr, 9 op/s
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.128 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.129 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404286.998728, df14e24b-3b49-44ee-865e-eda9837a9190 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.129 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] VM Paused (Lifecycle Event)
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.185 255071 INFO nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Took 10.27 seconds to spawn the instance on the hypervisor.
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.186 255071 DEBUG nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.194 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.197 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404286.9992025, df14e24b-3b49-44ee-865e-eda9837a9190 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.197 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] VM Resumed (Lifecycle Event)
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.225 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.229 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.352 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.400 255071 INFO nova.compute.manager [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Took 14.77 seconds to build instance.
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.404 255071 DEBUG nova.compute.manager [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.404 255071 DEBUG oslo_concurrency.lockutils [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.405 255071 DEBUG oslo_concurrency.lockutils [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.405 255071 DEBUG oslo_concurrency.lockutils [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.405 255071 DEBUG nova.compute.manager [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.406 255071 WARNING nova.compute.manager [req-7512d717-beab-4880-bbaf-a6982af5d057 req-50283d8c-f92e-492b-8ed7-3e5764ee8702 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received unexpected event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with vm_state active and task_state None.
Nov 29 08:18:08 compute-0 nova_compute[255040]: 2025-11-29 08:18:08.462 255071 DEBUG oslo_concurrency.lockutils [None req-f3f4bead-bcc4-4023-962c-47d64f4a502d 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:08 compute-0 ceph-mon[75237]: pgmap v1995: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 19 KiB/s wr, 9 op/s
Nov 29 08:18:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 15 KiB/s wr, 41 op/s
Nov 29 08:18:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:10 compute-0 nova_compute[255040]: 2025-11-29 08:18:10.174 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:11 compute-0 ceph-mon[75237]: pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 15 KiB/s wr, 41 op/s
Nov 29 08:18:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Nov 29 08:18:13 compute-0 nova_compute[255040]: 2025-11-29 08:18:13.010 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:13 compute-0 ceph-mon[75237]: pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Nov 29 08:18:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Nov 29 08:18:14 compute-0 ceph-mon[75237]: pgmap v1998: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Nov 29 08:18:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:15 compute-0 nova_compute[255040]: 2025-11-29 08:18:15.175 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Nov 29 08:18:16 compute-0 ceph-mon[75237]: pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 81 op/s
Nov 29 08:18:17 compute-0 nova_compute[255040]: 2025-11-29 08:18:17.897 255071 DEBUG nova.compute.manager [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-changed-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:17 compute-0 nova_compute[255040]: 2025-11-29 08:18:17.898 255071 DEBUG nova.compute.manager [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Refreshing instance network info cache due to event network-changed-2ff8b035-5dbe-484d-8c3b-b6f45649371f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:18:17 compute-0 nova_compute[255040]: 2025-11-29 08:18:17.898 255071 DEBUG oslo_concurrency.lockutils [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:18:17 compute-0 nova_compute[255040]: 2025-11-29 08:18:17.898 255071 DEBUG oslo_concurrency.lockutils [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:18:17 compute-0 nova_compute[255040]: 2025-11-29 08:18:17.898 255071 DEBUG nova.network.neutron [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Refreshing network info cache for port 2ff8b035-5dbe-484d-8c3b-b6f45649371f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:18:17 compute-0 podman[296358]: 2025-11-29 08:18:17.91479744 +0000 UTC m=+0.084041732 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 29 08:18:18 compute-0 nova_compute[255040]: 2025-11-29 08:18:18.014 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:18:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3976563516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:18 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:18 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3976563516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3976563516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:18 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/3976563516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:19 compute-0 ceph-mon[75237]: pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:18:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 426 B/s wr, 85 op/s
Nov 29 08:18:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:20 compute-0 nova_compute[255040]: 2025-11-29 08:18:20.176 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:20 compute-0 ceph-mon[75237]: pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 426 B/s wr, 85 op/s
Nov 29 08:18:21 compute-0 ovn_controller[153295]: 2025-11-29T08:18:21Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.12
Nov 29 08:18:21 compute-0 ovn_controller[153295]: 2025-11-29T08:18:21Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:21 compute-0 nova_compute[255040]: 2025-11-29 08:18:21.541 255071 DEBUG nova.network.neutron [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updated VIF entry in instance network info cache for port 2ff8b035-5dbe-484d-8c3b-b6f45649371f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:18:21 compute-0 nova_compute[255040]: 2025-11-29 08:18:21.542 255071 DEBUG nova.network.neutron [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updating instance_info_cache with network_info: [{"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:21 compute-0 nova_compute[255040]: 2025-11-29 08:18:21.561 255071 DEBUG oslo_concurrency.lockutils [req-600dfe09-5ac0-4b4d-bec4-a08662f244e8 req-62497867-3f64-46a8-811f-8aab653c861a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-df14e24b-3b49-44ee-865e-eda9837a9190" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:18:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 684 KiB/s wr, 87 op/s
Nov 29 08:18:23 compute-0 nova_compute[255040]: 2025-11-29 08:18:23.017 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:23 compute-0 ceph-mon[75237]: pgmap v2002: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 684 KiB/s wr, 87 op/s
Nov 29 08:18:23 compute-0 nova_compute[255040]: 2025-11-29 08:18:23.797 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:23 compute-0 podman[296377]: 2025-11-29 08:18:23.886749846 +0000 UTC m=+0.057099401 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/557010455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/557010455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1005 KiB/s rd, 684 KiB/s wr, 44 op/s
Nov 29 08:18:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/557010455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/557010455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:24 compute-0 ceph-mon[75237]: pgmap v2003: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1005 KiB/s rd, 684 KiB/s wr, 44 op/s
Nov 29 08:18:24 compute-0 ovn_controller[153295]: 2025-11-29T08:18:24Z|00060|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.12
Nov 29 08:18:24 compute-0 ovn_controller[153295]: 2025-11-29T08:18:24Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:25 compute-0 nova_compute[255040]: 2025-11-29 08:18:25.178 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.0 MiB/s wr, 69 op/s
Nov 29 08:18:26 compute-0 ovn_controller[153295]: 2025-11-29T08:18:26Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:26 compute-0 ovn_controller[153295]: 2025-11-29T08:18:26Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:27 compute-0 ceph-mon[75237]: pgmap v2004: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.0 MiB/s wr, 69 op/s
Nov 29 08:18:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:27.142 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:27.143 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:27.144 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:27 compute-0 sudo[296397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:27 compute-0 sudo[296397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:27 compute-0 sudo[296397]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:27 compute-0 sudo[296422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:18:27 compute-0 sudo[296422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:27 compute-0 sudo[296422]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:27 compute-0 sudo[296447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:27 compute-0 sudo[296447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:27 compute-0 sudo[296447]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:27 compute-0 sudo[296472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:18:27 compute-0 sudo[296472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:27 compute-0 nova_compute[255040]: 2025-11-29 08:18:27.537 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:28 compute-0 nova_compute[255040]: 2025-11-29 08:18:28.019 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 77 op/s
Nov 29 08:18:28 compute-0 sudo[296472]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e3d069b7-056e-4605-a552-38c28ad73012 does not exist
Nov 29 08:18:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4f972519-0b58-4493-810f-878286f7e525 does not exist
Nov 29 08:18:28 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f12f2626-3dcc-42b7-a495-b05e558b9ecc does not exist
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: pgmap v2005: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 77 op/s
Nov 29 08:18:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:18:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:18:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:28 compute-0 sudo[296527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:28 compute-0 sudo[296527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:28 compute-0 sudo[296527]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:28 compute-0 sudo[296552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:18:28 compute-0 sudo[296552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:28 compute-0 sudo[296552]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:28 compute-0 sudo[296577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:28 compute-0 sudo[296577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:28 compute-0 sudo[296577]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:28 compute-0 sudo[296602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:18:28 compute-0 sudo[296602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:28 compute-0 podman[296666]: 2025-11-29 08:18:28.974519471 +0000 UTC m=+0.040003733 container create 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:18:29 compute-0 systemd[1]: Started libpod-conmon-6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded.scope.
Nov 29 08:18:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:28.959348215 +0000 UTC m=+0.024832497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:29.060135604 +0000 UTC m=+0.125619886 container init 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:29.067505122 +0000 UTC m=+0.132989384 container start 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:29.07045411 +0000 UTC m=+0.135938402 container attach 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:18:29 compute-0 jolly_bhabha[296681]: 167 167
Nov 29 08:18:29 compute-0 systemd[1]: libpod-6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded.scope: Deactivated successfully.
Nov 29 08:18:29 compute-0 conmon[296681]: conmon 6e5068254799b15b6990 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded.scope/container/memory.events
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:29.07638725 +0000 UTC m=+0.141871512 container died 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e673dad8cccb209ae82fcf2a7aa7f5d0339f812864719ab4f5faf7853b98e04b-merged.mount: Deactivated successfully.
Nov 29 08:18:29 compute-0 podman[296666]: 2025-11-29 08:18:29.120428599 +0000 UTC m=+0.185912861 container remove 6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 08:18:29 compute-0 systemd[1]: libpod-conmon-6e5068254799b15b69902beb5951c1fbabb5188c15005fc245e54063153e8ded.scope: Deactivated successfully.
Nov 29 08:18:29 compute-0 podman[296703]: 2025-11-29 08:18:29.287658939 +0000 UTC m=+0.043953559 container create 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:18:29 compute-0 systemd[1]: Started libpod-conmon-16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a.scope.
Nov 29 08:18:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:29 compute-0 podman[296703]: 2025-11-29 08:18:29.269701118 +0000 UTC m=+0.025995778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:29 compute-0 podman[296703]: 2025-11-29 08:18:29.371429492 +0000 UTC m=+0.127724142 container init 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 08:18:29 compute-0 podman[296703]: 2025-11-29 08:18:29.386114086 +0000 UTC m=+0.142408716 container start 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:18:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:18:29 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:18:29 compute-0 podman[296703]: 2025-11-29 08:18:29.389826866 +0000 UTC m=+0.146121496 container attach 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 78 op/s
Nov 29 08:18:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:30 compute-0 nova_compute[255040]: 2025-11-29 08:18:30.180 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:30 compute-0 ceph-mon[75237]: pgmap v2006: 305 pgs: 305 active+clean; 283 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 78 op/s
Nov 29 08:18:30 compute-0 funny_mirzakhani[296718]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:18:30 compute-0 funny_mirzakhani[296718]: --> relative data size: 1.0
Nov 29 08:18:30 compute-0 funny_mirzakhani[296718]: --> All data devices are unavailable
Nov 29 08:18:30 compute-0 systemd[1]: libpod-16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a.scope: Deactivated successfully.
Nov 29 08:18:30 compute-0 podman[296703]: 2025-11-29 08:18:30.443317373 +0000 UTC m=+1.199612003 container died 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 08:18:30 compute-0 systemd[1]: libpod-16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a.scope: Consumed 1.000s CPU time.
Nov 29 08:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d78c3f3c4d03a4d4e774229b7449b72b26c757d94835ff6e3ea77d40cf6bc2e-merged.mount: Deactivated successfully.
Nov 29 08:18:30 compute-0 podman[296703]: 2025-11-29 08:18:30.492545821 +0000 UTC m=+1.248840451 container remove 16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:30 compute-0 systemd[1]: libpod-conmon-16a3ef8f4cd33e36e26f87b13dd5232d4fd8e4af12111d52f7226775d930076a.scope: Deactivated successfully.
Nov 29 08:18:30 compute-0 sudo[296602]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:30 compute-0 sudo[296763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:30 compute-0 sudo[296763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:30 compute-0 sudo[296763]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:30 compute-0 sudo[296788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:18:30 compute-0 sudo[296788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:30 compute-0 sudo[296788]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:30 compute-0 sudo[296813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:30 compute-0 sudo[296813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:30 compute-0 sudo[296813]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:30 compute-0 sudo[296838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:18:30 compute-0 sudo[296838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.085790001 +0000 UTC m=+0.041989516 container create 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:18:31 compute-0 systemd[1]: Started libpod-conmon-8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c.scope.
Nov 29 08:18:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.153663139 +0000 UTC m=+0.109862674 container init 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.160970624 +0000 UTC m=+0.117170149 container start 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.164716025 +0000 UTC m=+0.120915540 container attach 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:18:31 compute-0 amazing_jemison[296919]: 167 167
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.071159729 +0000 UTC m=+0.027359264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:31 compute-0 systemd[1]: libpod-8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c.scope: Deactivated successfully.
Nov 29 08:18:31 compute-0 conmon[296919]: conmon 8926243919ebb0ca296e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c.scope/container/memory.events
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.169550275 +0000 UTC m=+0.125749780 container died 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9d91b3138b5706f2e9e156e3251e76440b0ca62d7875eb7afd045241b22e3e8-merged.mount: Deactivated successfully.
Nov 29 08:18:31 compute-0 podman[296903]: 2025-11-29 08:18:31.201030738 +0000 UTC m=+0.157230253 container remove 8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:18:31 compute-0 systemd[1]: libpod-conmon-8926243919ebb0ca296e07a9c51c5e3cb6979f9ca971e2bac77413043788263c.scope: Deactivated successfully.
Nov 29 08:18:31 compute-0 podman[296940]: 2025-11-29 08:18:31.401994751 +0000 UTC m=+0.052325513 container create 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:18:31 compute-0 systemd[1]: Started libpod-conmon-3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5.scope.
Nov 29 08:18:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34c52acc35978bc93d2d0c99ac9291ae48943274934d215a81868772f7980fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34c52acc35978bc93d2d0c99ac9291ae48943274934d215a81868772f7980fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34c52acc35978bc93d2d0c99ac9291ae48943274934d215a81868772f7980fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34c52acc35978bc93d2d0c99ac9291ae48943274934d215a81868772f7980fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:31 compute-0 podman[296940]: 2025-11-29 08:18:31.384210554 +0000 UTC m=+0.034541346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:31 compute-0 podman[296940]: 2025-11-29 08:18:31.492901626 +0000 UTC m=+0.143232408 container init 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:18:31 compute-0 podman[296940]: 2025-11-29 08:18:31.503033917 +0000 UTC m=+0.153364679 container start 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:31 compute-0 podman[296940]: 2025-11-29 08:18:31.509413427 +0000 UTC m=+0.159744209 container attach 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:18:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 66 op/s
Nov 29 08:18:32 compute-0 nova_compute[255040]: 2025-11-29 08:18:32.288 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:32 compute-0 adoring_brown[296956]: {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     "0": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "devices": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "/dev/loop3"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             ],
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_name": "ceph_lv0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_size": "21470642176",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "name": "ceph_lv0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "tags": {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_name": "ceph",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.crush_device_class": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.encrypted": "0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_id": "0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.vdo": "0"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             },
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "vg_name": "ceph_vg0"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         }
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     ],
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     "1": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "devices": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "/dev/loop4"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             ],
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_name": "ceph_lv1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_size": "21470642176",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "name": "ceph_lv1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "tags": {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_name": "ceph",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.crush_device_class": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.encrypted": "0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_id": "1",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.vdo": "0"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             },
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "vg_name": "ceph_vg1"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         }
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     ],
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     "2": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "devices": [
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "/dev/loop5"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             ],
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_name": "ceph_lv2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_size": "21470642176",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "name": "ceph_lv2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "tags": {
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.cluster_name": "ceph",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.crush_device_class": "",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.encrypted": "0",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osd_id": "2",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:                 "ceph.vdo": "0"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             },
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "type": "block",
Nov 29 08:18:32 compute-0 adoring_brown[296956]:             "vg_name": "ceph_vg2"
Nov 29 08:18:32 compute-0 adoring_brown[296956]:         }
Nov 29 08:18:32 compute-0 adoring_brown[296956]:     ]
Nov 29 08:18:32 compute-0 adoring_brown[296956]: }
Nov 29 08:18:32 compute-0 systemd[1]: libpod-3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5.scope: Deactivated successfully.
Nov 29 08:18:32 compute-0 conmon[296956]: conmon 3617f9691e919e2549d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5.scope/container/memory.events
Nov 29 08:18:32 compute-0 podman[296940]: 2025-11-29 08:18:32.335269228 +0000 UTC m=+0.985600060 container died 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:18:32 compute-0 ceph-mon[75237]: pgmap v2007: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 66 op/s
Nov 29 08:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e34c52acc35978bc93d2d0c99ac9291ae48943274934d215a81868772f7980fe-merged.mount: Deactivated successfully.
Nov 29 08:18:32 compute-0 podman[296940]: 2025-11-29 08:18:32.754499627 +0000 UTC m=+1.404830379 container remove 3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:18:32 compute-0 sudo[296838]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:32 compute-0 systemd[1]: libpod-conmon-3617f9691e919e2549d3a5423248ebdb8c033c8d6a44e9dec50d2fa5d55eeed5.scope: Deactivated successfully.
Nov 29 08:18:32 compute-0 sudo[296977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:32 compute-0 sudo[296977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:32 compute-0 sudo[296977]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:32 compute-0 sudo[297002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:18:32 compute-0 sudo[297002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:32 compute-0 sudo[297002]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:32 compute-0 sudo[297027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:32 compute-0 sudo[297027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:32 compute-0 sudo[297027]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:33 compute-0 sudo[297052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:18:33 compute-0 sudo[297052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:33 compute-0 nova_compute[255040]: 2025-11-29 08:18:33.021 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.379533888 +0000 UTC m=+0.023922241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.622916687 +0000 UTC m=+0.267305020 container create da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:33 compute-0 systemd[1]: Started libpod-conmon-da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62.scope.
Nov 29 08:18:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.698278766 +0000 UTC m=+0.342667109 container init da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.705106789 +0000 UTC m=+0.349495122 container start da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:18:33 compute-0 ecstatic_shamir[297135]: 167 167
Nov 29 08:18:33 compute-0 systemd[1]: libpod-da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62.scope: Deactivated successfully.
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.748731307 +0000 UTC m=+0.393119660 container attach da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:18:33 compute-0 podman[297119]: 2025-11-29 08:18:33.749262221 +0000 UTC m=+0.393650584 container died da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.041 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.043 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.043 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.043 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.044 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.045 255071 INFO nova.compute.manager [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Terminating instance
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.046 255071 DEBUG nova.compute.manager [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:18:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 698 KiB/s wr, 36 op/s
Nov 29 08:18:34 compute-0 ceph-mon[75237]: pgmap v2008: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 698 KiB/s wr, 36 op/s
Nov 29 08:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb04a5559d853241f7b6ee7c0fa7121b70160c8a050df9d1af0c0379ca260d66-merged.mount: Deactivated successfully.
Nov 29 08:18:34 compute-0 kernel: tap2ff8b035-5d (unregistering): left promiscuous mode
Nov 29 08:18:34 compute-0 NetworkManager[49116]: <info>  [1764404314.6972] device (tap2ff8b035-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00247|binding|INFO|Releasing lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f from this chassis (sb_readonly=0)
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00248|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f down in Southbound
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.708 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00249|binding|INFO|Removing iface tap2ff8b035-5d ovn-installed in OVS
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.711 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.715 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:d2:ad 10.100.0.12'], port_security=['fa:16:3e:e3:d2:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2ff8b035-5dbe-484d-8c3b-b6f45649371f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.717 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff8b035-5dbe-484d-8c3b-b6f45649371f in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a unbound from our chassis
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.718 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7844e875-d723-468d-8c4a-c3bb5b3b635a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.720 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e1312bb1-eb68-44d3-b130-9407431ea942]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.720 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace which is not needed anymore
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.731 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 08:18:34 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 16.354s CPU time.
Nov 29 08:18:34 compute-0 systemd-machined[216271]: Machine qemu-26-instance-0000001a terminated.
Nov 29 08:18:34 compute-0 kernel: tap2ff8b035-5d: entered promiscuous mode
Nov 29 08:18:34 compute-0 kernel: tap2ff8b035-5d (unregistering): left promiscuous mode
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.871 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00250|binding|INFO|Claiming lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f for this chassis.
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00251|binding|INFO|2ff8b035-5dbe-484d-8c3b-b6f45649371f: Claiming fa:16:3e:e3:d2:ad 10.100.0.12
Nov 29 08:18:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:34.886 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:d2:ad 10.100.0.12'], port_security=['fa:16:3e:e3:d2:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2ff8b035-5dbe-484d-8c3b-b6f45649371f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.891 255071 INFO nova.virt.libvirt.driver [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Instance destroyed successfully.
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.892 255071 DEBUG nova.objects.instance [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lazy-loading 'resources' on Instance uuid df14e24b-3b49-44ee-865e-eda9837a9190 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00252|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f ovn-installed in OVS
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00253|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f up in Southbound
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.895 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00254|binding|INFO|Releasing lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f from this chassis (sb_readonly=1)
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00255|binding|INFO|Removing iface tap2ff8b035-5d ovn-installed in OVS
Nov 29 08:18:34 compute-0 ovn_controller[153295]: 2025-11-29T08:18:34Z|00256|if_status|INFO|Not setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f down as sb is readonly
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.898 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:34 compute-0 nova_compute[255040]: 2025-11-29 08:18:34.912 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:35 compute-0 nova_compute[255040]: 2025-11-29 08:18:35.183 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:35 compute-0 podman[297119]: 2025-11-29 08:18:35.350495551 +0000 UTC m=+1.994883884 container remove da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:18:35 compute-0 systemd[1]: libpod-conmon-da9038bc68766855bf11ea0a290404bff8dea66d36d781b892f1765072203a62.scope: Deactivated successfully.
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [NOTICE]   (296314) : haproxy version is 2.8.14-c23fe91
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [NOTICE]   (296314) : path to executable is /usr/sbin/haproxy
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [WARNING]  (296314) : Exiting Master process...
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [WARNING]  (296314) : Exiting Master process...
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [ALERT]    (296314) : Current worker (296316) exited with code 143 (Terminated)
Nov 29 08:18:35 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[296310]: [WARNING]  (296314) : All workers exited. Exiting... (0)
Nov 29 08:18:35 compute-0 systemd[1]: libpod-da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6.scope: Deactivated successfully.
Nov 29 08:18:35 compute-0 podman[297188]: 2025-11-29 08:18:35.511404341 +0000 UTC m=+0.055098057 container died da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6-userdata-shm.mount: Deactivated successfully.
Nov 29 08:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-31a53c3a1e9c31b06601e50eeb0b740eabaca40b52b1c2077830d74c599bfa09-merged.mount: Deactivated successfully.
Nov 29 08:18:35 compute-0 podman[297205]: 2025-11-29 08:18:35.546431979 +0000 UTC m=+0.053622798 container create 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:35 compute-0 podman[297205]: 2025-11-29 08:18:35.528224772 +0000 UTC m=+0.035415601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:18:35 compute-0 podman[297188]: 2025-11-29 08:18:35.693399276 +0000 UTC m=+0.237092992 container cleanup da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:18:35 compute-0 systemd[1]: libpod-conmon-da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6.scope: Deactivated successfully.
Nov 29 08:18:35 compute-0 nova_compute[255040]: 2025-11-29 08:18:35.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:35 compute-0 systemd[1]: Started libpod-conmon-10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50.scope.
Nov 29 08:18:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca895f572d46ba423221d2589b2a92abd90c585ec88a9fb217f97ee5244cf915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca895f572d46ba423221d2589b2a92abd90c585ec88a9fb217f97ee5244cf915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca895f572d46ba423221d2589b2a92abd90c585ec88a9fb217f97ee5244cf915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca895f572d46ba423221d2589b2a92abd90c585ec88a9fb217f97ee5244cf915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 701 KiB/s wr, 40 op/s
Nov 29 08:18:36 compute-0 podman[297205]: 2025-11-29 08:18:36.177485021 +0000 UTC m=+0.684675860 container init 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:18:36 compute-0 podman[297205]: 2025-11-29 08:18:36.187526931 +0000 UTC m=+0.694717740 container start 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:36 compute-0 podman[297205]: 2025-11-29 08:18:36.192973266 +0000 UTC m=+0.700164095 container attach 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.552 255071 DEBUG nova.virt.libvirt.vif [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-736452909',display_name='tempest-TestEncryptedCinderVolumes-server-736452909',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-736452909',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEdRfFooeDrPdIr34Yh+0fce0QIhdx7hRFz43DuSx97qmzkIJdqTsJhIJpvFpHMnUcNk19c2heDhEKtTmUb/iXamAI7Q4J7B78+R5sIhgPtRSP6lsf7edjGY0plIk9Wynw==',key_name='tempest-TestEncryptedCinderVolumes-458026389',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6a2673206a04ec28205d820751e3174',ramdisk_id='',reservation_id='r-vg8f425i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-2116890995',owner_user_name='tempest-TestEncryptedCinderVolumes-2116890995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:08Z,user_data=None,user_id='8a7b756f6c364e97a9d0d5298587d61c',uuid=df14e24b-3b49-44ee-865e-eda9837a9190,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.553 255071 DEBUG nova.network.os_vif_util [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converting VIF {"id": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "address": "fa:16:3e:e3:d2:ad", "network": {"id": "7844e875-d723-468d-8c4a-c3bb5b3b635a", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2016378387-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6a2673206a04ec28205d820751e3174", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ff8b035-5d", "ovs_interfaceid": "2ff8b035-5dbe-484d-8c3b-b6f45649371f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.554 255071 DEBUG nova.network.os_vif_util [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.554 255071 DEBUG os_vif [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.556 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.556 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ff8b035-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.558 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.559 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.562 255071 INFO os_vif [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:d2:ad,bridge_name='br-int',has_traffic_filtering=True,id=2ff8b035-5dbe-484d-8c3b-b6f45649371f,network=Network(7844e875-d723-468d-8c4a-c3bb5b3b635a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ff8b035-5d')
Nov 29 08:18:36 compute-0 ovn_controller[153295]: 2025-11-29T08:18:36Z|00257|binding|INFO|Releasing lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f from this chassis (sb_readonly=0)
Nov 29 08:18:36 compute-0 ovn_controller[153295]: 2025-11-29T08:18:36Z|00258|binding|INFO|Setting lport 2ff8b035-5dbe-484d-8c3b-b6f45649371f down in Southbound
Nov 29 08:18:36 compute-0 nova_compute[255040]: 2025-11-29 08:18:36.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:37 compute-0 adoring_swartz[297249]: {
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_id": 2,
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "type": "bluestore"
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     },
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_id": 0,
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "type": "bluestore"
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     },
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_id": 1,
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:         "type": "bluestore"
Nov 29 08:18:37 compute-0 adoring_swartz[297249]:     }
Nov 29 08:18:37 compute-0 adoring_swartz[297249]: }
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.127 255071 DEBUG nova.compute.manager [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.128 255071 DEBUG oslo_concurrency.lockutils [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.128 255071 DEBUG oslo_concurrency.lockutils [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.129 255071 DEBUG oslo_concurrency.lockutils [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.129 255071 DEBUG nova.compute.manager [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.129 255071 DEBUG nova.compute.manager [req-a4a70127-485a-4473-bffb-b8c50d1bbb6f req-e228450c-a27b-4ac5-b29e-d5f2150b29e0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:18:37 compute-0 systemd[1]: libpod-10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50.scope: Deactivated successfully.
Nov 29 08:18:37 compute-0 podman[297205]: 2025-11-29 08:18:37.148576812 +0000 UTC m=+1.655767621 container died 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:18:37 compute-0 ceph-mon[75237]: pgmap v2009: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 701 KiB/s wr, 40 op/s
Nov 29 08:18:37 compute-0 podman[297232]: 2025-11-29 08:18:37.429191718 +0000 UTC m=+1.712743006 container remove da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.436 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[2f500116-5799-478c-b10d-6bd9e7847217]: (4, ('Sat Nov 29 08:18:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6)\nda9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6\nSat Nov 29 08:18:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (da9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6)\nda9ac8ca7241661ca4c077b8b7b4813b222e80f7aa3959a0e5e3622e1d158ea6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.438 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[97923dcb-5358-4266-9bf0-107abe18dd83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca895f572d46ba423221d2589b2a92abd90c585ec88a9fb217f97ee5244cf915-merged.mount: Deactivated successfully.
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.441 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.443 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 kernel: tap7844e875-d0: left promiscuous mode
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.458 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.463 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a4841e93-d2ff-4295-a6a1-8721742dae52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.476 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b4588e-3d7d-43cb-84bc-4802c42a405d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 podman[297205]: 2025-11-29 08:18:37.478302184 +0000 UTC m=+1.985492993 container remove 10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_swartz, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.479 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d14cdb63-062c-4ec8-ae8b-0018493c0a6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 systemd[1]: libpod-conmon-10b75ebdf1939ce275efe34c68c24ef9549077255544122fba9ec0556d2b8c50.scope: Deactivated successfully.
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.503 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[06cb9188-a055-4d3d-bf27-07bdc975dfa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652171, 'reachable_time': 30795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297326, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.506 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.506 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[55106e0f-a2e9-4e86-9770-d8c6355749e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.507 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff8b035-5dbe-484d-8c3b-b6f45649371f in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a unbound from our chassis
Nov 29 08:18:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d7844e875\x2dd723\x2d468d\x2d8c4a\x2dc3bb5b3b635a.mount: Deactivated successfully.
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.508 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:37 compute-0 sudo[297052]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.522 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[86ff02a9-ec8f-46c0-bdf1-9383c7cb8994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.523 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7844e875-d1 in ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.525 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7844e875-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.525 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8a648adf-7243-477f-b891-43bda24c0010]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.525 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7e706e0d-f9bb-44e0-b366-6e71e3e64238]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.537 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[b606228f-3206-4298-82f4-cba3a50be8a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:37 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a993fe6f-3a23-4c22-929b-3cf95eb0ce06 does not exist
Nov 29 08:18:37 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 61e32b77-e4fd-4080-8ed5-80b45da88933 does not exist
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.561 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1d08b833-4088-4435-a7c8-89980d573563]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.590 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e77112-dbbe-4ebc-ad71-6f8ff983f2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 systemd-udevd[297159]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:18:37 compute-0 podman[297302]: 2025-11-29 08:18:37.600003733 +0000 UTC m=+0.420516364 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:18:37 compute-0 NetworkManager[49116]: <info>  [1764404317.6003] manager: (tap7844e875-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.599 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaf410a-00cf-4ff4-9ee4-2eca666f43a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 sudo[297338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:18:37 compute-0 sudo[297338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:37 compute-0 sudo[297338]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.633 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d044212c-302b-48dc-838f-047c9d83833d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.636 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6373455a-5256-48b7-bba8-01bd56761257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 NetworkManager[49116]: <info>  [1764404317.6596] device (tap7844e875-d0): carrier: link connected
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.665 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[af77fc63-10a9-47b4-81a0-62910a268418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 sudo[297389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:18:37 compute-0 sudo[297389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:18:37 compute-0 sudo[297389]: pam_unix(sudo:session): session closed for user root
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.687 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[161fafcf-f5ef-463b-93fe-af6174411d4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655499, 'reachable_time': 31612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297414, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.704 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9e46d95b-74dd-41bb-b5de-b5a9e713d7fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:7298'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655499, 'tstamp': 655499}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297417, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.720 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[889d861f-73bd-475f-8360-1caf7186f45d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7844e875-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:72:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655499, 'reachable_time': 31612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297418, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.748 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5dab671a-29d0-4bbd-8066-a97545dbe984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.795 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c0784589-0aea-4c49-a9f4-7a95673c1bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.796 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.796 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.797 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7844e875-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.798 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 NetworkManager[49116]: <info>  [1764404317.7993] manager: (tap7844e875-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 08:18:37 compute-0 kernel: tap7844e875-d0: entered promiscuous mode
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.800 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.802 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7844e875-d0, col_values=(('external_ids', {'iface-id': 'b495613a-3fb1-48c4-aa81-640b29e83d9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.802 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 ovn_controller[153295]: 2025-11-29T08:18:37Z|00259|binding|INFO|Releasing lport b495613a-3fb1-48c4-aa81-640b29e83d9b from this chassis (sb_readonly=1)
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.817 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.817 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.818 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.818 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00e7f4-64aa-446f-a554-c93d871eb90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.819 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/7844e875-d723-468d-8c4a-c3bb5b3b635a.pid.haproxy
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID 7844e875-d723-468d-8c4a-c3bb5b3b635a
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.820 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'env', 'PROCESS_TAG=haproxy-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7844e875-d723-468d-8c4a-c3bb5b3b635a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:18:37 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:37.874 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:d2:ad 10.100.0.12'], port_security=['fa:16:3e:e3:d2:ad 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df14e24b-3b49-44ee-865e-eda9837a9190', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6a2673206a04ec28205d820751e3174', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3cd54b75-8b12-47dc-bfa7-93fa344b6482', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e40ac74c-e68a-47d3-8a1f-fd021a26891c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=2ff8b035-5dbe-484d-8c3b-b6f45649371f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:37 compute-0 nova_compute[255040]: 2025-11-29 08:18:37.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 359 KiB/s wr, 16 op/s
Nov 29 08:18:38 compute-0 podman[297450]: 2025-11-29 08:18:38.140028258 +0000 UTC m=+0.032883781 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:18:38
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 08:18:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:18:38 compute-0 nova_compute[255040]: 2025-11-29 08:18:38.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:38 compute-0 nova_compute[255040]: 2025-11-29 08:18:38.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:18:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:18:39 compute-0 ceph-mon[75237]: pgmap v2010: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 359 KiB/s wr, 16 op/s
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.225 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.226 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.227 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.227 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.228 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.228 255071 WARNING nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received unexpected event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with vm_state active and task_state deleting.
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.229 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.229 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.230 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.231 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.231 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.232 255071 WARNING nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received unexpected event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with vm_state active and task_state deleting.
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.233 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.234 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.234 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.235 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.236 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.237 255071 WARNING nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received unexpected event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with vm_state active and task_state deleting.
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.237 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.238 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.239 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.240 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.242 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.242 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-unplugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.242 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.242 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.243 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.243 255071 DEBUG oslo_concurrency.lockutils [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.243 255071 DEBUG nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] No waiting events found dispatching network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.243 255071 WARNING nova.compute.manager [req-b0993837-e023-457e-bd4e-cf2c75a9c016 req-7bb6b240-5d74-462b-a09e-238691e04ec7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received unexpected event network-vif-plugged-2ff8b035-5dbe-484d-8c3b-b6f45649371f for instance with vm_state active and task_state deleting.
Nov 29 08:18:39 compute-0 podman[297450]: 2025-11-29 08:18:39.448058003 +0000 UTC m=+1.340913516 container create 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:39 compute-0 systemd[1]: Started libpod-conmon-5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d.scope.
Nov 29 08:18:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb56321dfb8b2213ec612830c084bf2236b52fbe21ebe237a7ca02683498d2aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:18:39 compute-0 nova_compute[255040]: 2025-11-29 08:18:39.916 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:39 compute-0 podman[297450]: 2025-11-29 08:18:39.957487578 +0000 UTC m=+1.850343111 container init 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:18:39 compute-0 podman[297450]: 2025-11-29 08:18:39.969232263 +0000 UTC m=+1.862087776 container start 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [NOTICE]   (297469) : New worker (297471) forked
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [NOTICE]   (297469) : Loading success.
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.037 255071 INFO nova.virt.libvirt.driver [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Deleting instance files /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190_del
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.038 255071 INFO nova.virt.libvirt.driver [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Deletion of /var/lib/nova/instances/df14e24b-3b49-44ee-865e-eda9837a9190_del complete
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.053 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 2ff8b035-5dbe-484d-8c3b-b6f45649371f in datapath 7844e875-d723-468d-8c4a-c3bb5b3b635a unbound from our chassis
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.055 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7844e875-d723-468d-8c4a-c3bb5b3b635a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.056 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a8750b72-d26c-465c-af9b-d3f63bbaccf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.056 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a namespace which is not needed anymore
Nov 29 08:18:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 359 KiB/s wr, 16 op/s
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.093 255071 INFO nova.compute.manager [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Took 6.05 seconds to destroy the instance on the hypervisor.
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.094 255071 DEBUG oslo.service.loopingcall [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.094 255071 DEBUG nova.compute.manager [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.094 255071 DEBUG nova.network.neutron [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:18:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [NOTICE]   (297469) : haproxy version is 2.8.14-c23fe91
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [NOTICE]   (297469) : path to executable is /usr/sbin/haproxy
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [WARNING]  (297469) : Exiting Master process...
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [ALERT]    (297469) : Current worker (297471) exited with code 143 (Terminated)
Nov 29 08:18:40 compute-0 neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a[297465]: [WARNING]  (297469) : All workers exited. Exiting... (0)
Nov 29 08:18:40 compute-0 systemd[1]: libpod-5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d.scope: Deactivated successfully.
Nov 29 08:18:40 compute-0 podman[297498]: 2025-11-29 08:18:40.181314014 +0000 UTC m=+0.044587296 container died 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.185 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d-userdata-shm.mount: Deactivated successfully.
Nov 29 08:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb56321dfb8b2213ec612830c084bf2236b52fbe21ebe237a7ca02683498d2aa-merged.mount: Deactivated successfully.
Nov 29 08:18:40 compute-0 podman[297498]: 2025-11-29 08:18:40.226236237 +0000 UTC m=+0.089509499 container cleanup 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:18:40 compute-0 systemd[1]: libpod-conmon-5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d.scope: Deactivated successfully.
Nov 29 08:18:40 compute-0 podman[297529]: 2025-11-29 08:18:40.294204678 +0000 UTC m=+0.048431709 container remove 5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.300 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c00ae306-0aae-40c9-b014-1ce0baabf430]: (4, ('Sat Nov 29 08:18:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d)\n5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d\nSat Nov 29 08:18:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a (5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d)\n5ed632a62c059511980e5542c22245aa69c6108c4cc76c6a91ddd1716fb0133d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.302 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f911c1e0-6636-444f-b190-ba08a0134927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.303 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7844e875-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.305 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.321 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:40 compute-0 kernel: tap7844e875-d0: left promiscuous mode
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.323 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.324 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f87a0c-a985-421a-8db1-86d513f117a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.336 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[479a8edf-2fb7-44d8-8eab-96ed7efe011b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.338 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[45cc1beb-da08-4704-9505-9f287d082e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.353 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb6f38b-3ab6-4b07-84f7-0f44ae7bbf51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655492, 'reachable_time': 36137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297543, 'error': None, 'target': 'ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.355 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7844e875-d723-468d-8c4a-c3bb5b3b635a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:18:40 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:40.355 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[da8047aa-b692-4dee-bed5-addb314bcd1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:18:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d7844e875\x2dd723\x2d468d\x2d8c4a\x2dc3bb5b3b635a.mount: Deactivated successfully.
Nov 29 08:18:40 compute-0 ceph-mon[75237]: pgmap v2011: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 359 KiB/s wr, 16 op/s
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.948 255071 DEBUG nova.network.neutron [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.975 255071 INFO nova.compute.manager [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Took 0.88 seconds to deallocate network for instance.
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.996 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.996 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.997 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:40 compute-0 nova_compute[255040]: 2025-11-29 08:18:40.997 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.020 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.021 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.021 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.241 255071 INFO nova.compute.manager [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Took 0.26 seconds to detach 1 volumes for instance.
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.290 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.291 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.308 255071 DEBUG nova.compute.manager [req-f435af09-92f2-415f-88e0-61d655522253 req-b1a8e2fa-38e2-429d-8a48-4808a2aac9af cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Received event network-vif-deleted-2ff8b035-5dbe-484d-8c3b-b6f45649371f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.316 255071 DEBUG nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.331 255071 DEBUG nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.331 255071 DEBUG nova.compute.provider_tree [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.455 255071 DEBUG nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:18:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115600835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.510 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.518 255071 DEBUG nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.552 255071 DEBUG oslo_concurrency.processutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.572 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3115600835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.710 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.712 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4311MB free_disk=59.988136291503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.712 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:18:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118774023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.983 255071 DEBUG oslo_concurrency.processutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:41 compute-0 nova_compute[255040]: 2025-11-29 08:18:41.990 255071 DEBUG nova.compute.provider_tree [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.018 255071 DEBUG nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.044 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.049 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:18:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 349 KiB/s wr, 20 op/s
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.106 255071 INFO nova.scheduler.client.report [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Deleted allocations for instance df14e24b-3b49-44ee-865e-eda9837a9190
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.131 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.132 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.153 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.210 255071 DEBUG oslo_concurrency.lockutils [None req-e3860764-520a-42e7-a6ca-517776764509 8a7b756f6c364e97a9d0d5298587d61c e6a2673206a04ec28205d820751e3174 - - default default] Lock "df14e24b-3b49-44ee-865e-eda9837a9190" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.414 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:18:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256417140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.592 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:18:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1118774023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:42 compute-0 ceph-mon[75237]: pgmap v2012: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 349 KiB/s wr, 20 op/s
Nov 29 08:18:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4256417140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.597 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:18:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Nov 29 08:18:42 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.614 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.642 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:18:42 compute-0 nova_compute[255040]: 2025-11-29 08:18:42.643 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:18:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:18:43 compute-0 ceph-mon[75237]: osdmap e491: 3 total, 3 up, 3 in
Nov 29 08:18:43 compute-0 nova_compute[255040]: 2025-11-29 08:18:43.622 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:43 compute-0 nova_compute[255040]: 2025-11-29 08:18:43.623 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:18:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339055309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339055309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 4.9 KiB/s wr, 22 op/s
Nov 29 08:18:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2339055309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2339055309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:44 compute-0 ceph-mon[75237]: pgmap v2014: 305 pgs: 305 active+clean; 287 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 4.9 KiB/s wr, 22 op/s
Nov 29 08:18:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:45 compute-0 nova_compute[255040]: 2025-11-29 08:18:45.188 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1380081640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1380081640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1380081640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1380081640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:45 compute-0 nova_compute[255040]: 2025-11-29 08:18:45.835 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 29 08:18:46 compute-0 nova_compute[255040]: 2025-11-29 08:18:46.576 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:46 compute-0 ceph-mon[75237]: pgmap v2015: 305 pgs: 305 active+clean; 279 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 29 08:18:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Nov 29 08:18:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Nov 29 08:18:47 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Nov 29 08:18:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Nov 29 08:18:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2404676883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2404676883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:48 compute-0 ceph-mon[75237]: osdmap e492: 3 total, 3 up, 3 in
Nov 29 08:18:48 compute-0 ceph-mon[75237]: pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Nov 29 08:18:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2404676883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2404676883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:48 compute-0 podman[297615]: 2025-11-29 08:18:48.900201798 +0000 UTC m=+0.054805629 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 08:18:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/400763423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/400763423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/400763423' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/400763423' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:49 compute-0 nova_compute[255040]: 2025-11-29 08:18:49.891 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404314.8894992, df14e24b-3b49-44ee-865e-eda9837a9190 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:18:49 compute-0 nova_compute[255040]: 2025-11-29 08:18:49.891 255071 INFO nova.compute.manager [-] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] VM Stopped (Lifecycle Event)
Nov 29 08:18:49 compute-0 nova_compute[255040]: 2025-11-29 08:18:49.926 255071 DEBUG nova.compute.manager [None req-f154e6bf-b6b6-439d-b53e-ef633887c8a6 - - - - - -] [instance: df14e24b-3b49-44ee-865e-eda9837a9190] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:18:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 4.4 KiB/s wr, 139 op/s
Nov 29 08:18:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:50 compute-0 nova_compute[255040]: 2025-11-29 08:18:50.191 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:50 compute-0 ceph-mon[75237]: pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 4.4 KiB/s wr, 139 op/s
Nov 29 08:18:51 compute-0 nova_compute[255040]: 2025-11-29 08:18:51.578 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 4.4 KiB/s wr, 128 op/s
Nov 29 08:18:52 compute-0 nova_compute[255040]: 2025-11-29 08:18:52.850 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-0 ceph-mon[75237]: pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 4.4 KiB/s wr, 128 op/s
Nov 29 08:18:53 compute-0 nova_compute[255040]: 2025-11-29 08:18:53.229 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:18:53 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.9 total, 600.0 interval
                                           Cumulative writes: 29K writes, 109K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.75 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 30.10 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5550 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
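The derived figures in these periodic DB Stats dumps are plain ratios of the raw counters: writes-per-sync divides WAL writes by WAL syncs, and the MB/s rates divide ingested bytes by the 600 s interval (or by total uptime for the cumulative rows). A quick check against the interval row above; the exact write count is an assumption, since the log rounds it to "13K":

    # Re-derive the "Interval WAL" figures from the DB Stats dump above.
    interval_secs = 600.0
    wal_syncs = 5550
    wal_writes = 13320   # logged rounded as "13K"; 13,320 is implied by 2.40 w/sync
    ingest_mb = 30.10    # "ingest: 30.10 MB" from the Interval writes row

    print(f"writes per sync: {wal_writes / wal_syncs:.2f}")          # 2.40, as logged
    print(f"ingest rate:     {ingest_mb / interval_secs:.2f} MB/s")  # 0.05, as logged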
Nov 29 08:18:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 4.2 KiB/s wr, 121 op/s
Nov 29 08:18:54 compute-0 podman[297636]: 2025-11-29 08:18:54.897683569 +0000 UTC m=+0.064603882 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:18:55 compute-0 ceph-mon[75237]: pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 4.2 KiB/s wr, 121 op/s
Nov 29 08:18:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:18:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Nov 29 08:18:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Nov 29 08:18:55 compute-0 ceph-mon[75237]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Nov 29 08:18:55 compute-0 nova_compute[255040]: 2025-11-29 08:18:55.193 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.9 KiB/s wr, 88 op/s
Nov 29 08:18:56 compute-0 ceph-mon[75237]: osdmap e493: 3 total, 3 up, 3 in
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:18:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
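The pg_autoscaler lines are internally consistent: with 3 OSDs and the default mon_target_pg_per_osd of 100 (the 100 is an assumption; the rest is from the log), the PG budget is 300, and each pool's "pg target" is its share of used space times that budget times its bias (4.0 for the two metadata pools). The result is then quantized, and targets far below the pool's current pg_num leave it unchanged. A worked check against the 'volumes' and 'cephfs.cephfs.meta' lines above:

    # Reproduce two "pg target" values from the pg_autoscaler lines above.
    pg_budget = 100 * 3  # assumed default mon_target_pg_per_osd (100) x 3 OSDs

    for pool, usage, bias in [
        ("volumes", 0.002894458247867422, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(f"{pool}: pg target {usage * pg_budget * bias}")
    # volumes: 0.8683374743602266; cephfs.cephfs.meta: 0.0006104707950771635 --
    # both match the log, and both are < 1, so each pool keeps its current pg_num.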
Nov 29 08:18:56 compute-0 nova_compute[255040]: 2025-11-29 08:18:56.580 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:57 compute-0 ceph-mon[75237]: pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.9 KiB/s wr, 88 op/s
Nov 29 08:18:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.6 KiB/s wr, 74 op/s
Nov 29 08:18:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:18:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030324487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:18:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030324487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:18:59 compute-0 ceph-mon[75237]: pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.6 KiB/s wr, 74 op/s
Nov 29 08:18:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1030324487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:18:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1030324487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
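The audit entries from client.openstack show the polling loop a Cinder-style client runs against the mon: a "df" plus an "osd pool get-quota" per pool, issued as JSON mon commands. A minimal sketch of issuing the same two commands through the python-rados binding (the conffile path is an assumption; the command bodies are copied from the audit lines above):

    import json
    import rados

    # Connect as the same entity seen in the audit log.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, outbuf[:80])

    cluster.shutdown()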
Nov 29 08:18:59 compute-0 nova_compute[255040]: 2025-11-29 08:18:59.969 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:18:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:59.970 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:18:59 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:18:59.972 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:19:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 716 B/s wr, 10 op/s
Nov 29 08:19:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:00 compute-0 nova_compute[255040]: 2025-11-29 08:19:00.195 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:00 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:00.975 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:01 compute-0 ceph-mon[75237]: pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 716 B/s wr, 10 op/s
Nov 29 08:19:01 compute-0 nova_compute[255040]: 2025-11-29 08:19:01.582 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:03 compute-0 ceph-mon[75237]: pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:05 compute-0 nova_compute[255040]: 2025-11-29 08:19:05.197 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:05 compute-0 ceph-mon[75237]: pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:06 compute-0 nova_compute[255040]: 2025-11-29 08:19:06.584 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:19:06 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 112K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.80 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 30.99 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5503 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:19:07 compute-0 ceph-mon[75237]: pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:08 compute-0 podman[297657]: 2025-11-29 08:19:08.031251964 +0000 UTC m=+0.180385942 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:09 compute-0 ceph-mon[75237]: pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:10 compute-0 nova_compute[255040]: 2025-11-29 08:19:10.205 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:11 compute-0 ceph-mon[75237]: pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:11 compute-0 nova_compute[255040]: 2025-11-29 08:19:11.587 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:19:12 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 23K writes, 94K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8348 syncs, 2.87 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.47 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4333 syncs, 2.42 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:19:13 compute-0 ceph-mon[75237]: pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:15 compute-0 nova_compute[255040]: 2025-11-29 08:19:15.207 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:15 compute-0 ceph-mon[75237]: pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:19:15 compute-0 ceph-mgr[75527]: [devicehealth INFO root] Check health
Nov 29 08:19:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 0 op/s
Nov 29 08:19:16 compute-0 nova_compute[255040]: 2025-11-29 08:19:16.589 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:17 compute-0 ceph-mon[75237]: pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 683 KiB/s rd, 0 op/s
Nov 29 08:19:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 5 op/s
Nov 29 08:19:18 compute-0 ceph-mon[75237]: pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 5 op/s
Nov 29 08:19:19 compute-0 podman[297683]: 2025-11-29 08:19:19.909060038 +0000 UTC m=+0.072037920 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 08:19:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 08:19:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:20 compute-0 nova_compute[255040]: 2025-11-29 08:19:20.210 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:21 compute-0 ceph-mon[75237]: pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 08:19:21 compute-0 nova_compute[255040]: 2025-11-29 08:19:21.591 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 9 op/s
Nov 29 08:19:23 compute-0 ceph-mon[75237]: pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 9 op/s
Nov 29 08:19:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 9 op/s
Nov 29 08:19:25 compute-0 ceph-mon[75237]: pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 9 op/s
Nov 29 08:19:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:25 compute-0 nova_compute[255040]: 2025-11-29 08:19:25.214 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:25 compute-0 podman[297701]: 2025-11-29 08:19:25.909133298 +0000 UTC m=+0.073694065 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 08:19:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:19:26 compute-0 nova_compute[255040]: 2025-11-29 08:19:26.594 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:27.144 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:27.144 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:27.145 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
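The three oslo_concurrency lines above are the standard acquire/execute/release trace produced by the lockutils `synchronized` decorator, which Neutron's ProcessMonitor wraps around _check_child_processes. A minimal sketch of the pattern (the class and method body here are placeholders):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Body elided; the decorator's wrapper emits the "Acquiring lock",
            # "acquired" and "released" DEBUG lines seen above.
            pass

    ProcessMonitor()._check_child_processes()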
Nov 29 08:19:27 compute-0 ceph-mon[75237]: pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:19:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:19:29 compute-0 ceph-mon[75237]: pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:19:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Nov 29 08:19:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:30 compute-0 nova_compute[255040]: 2025-11-29 08:19:30.216 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:31 compute-0 ovn_controller[153295]: 2025-11-29T08:19:31Z|00260|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Nov 29 08:19:31 compute-0 ceph-mon[75237]: pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Nov 29 08:19:31 compute-0 nova_compute[255040]: 2025-11-29 08:19:31.595 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:19:33 compute-0 ceph-mon[75237]: pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:19:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 22 KiB/s wr, 2 op/s
Nov 29 08:19:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:35 compute-0 ceph-mon[75237]: pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 22 KiB/s wr, 2 op/s
Nov 29 08:19:35 compute-0 nova_compute[255040]: 2025-11-29 08:19:35.218 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:35 compute-0 nova_compute[255040]: 2025-11-29 08:19:35.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 303 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 2.7 MiB/s wr, 6 op/s
Nov 29 08:19:36 compute-0 nova_compute[255040]: 2025-11-29 08:19:36.597 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:36 compute-0 nova_compute[255040]: 2025-11-29 08:19:36.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:37 compute-0 ceph-mon[75237]: pgmap v2042: 305 pgs: 305 active+clean; 303 MiB data, 632 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 2.7 MiB/s wr, 6 op/s
Nov 29 08:19:37 compute-0 sudo[297723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:37 compute-0 sudo[297723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:37 compute-0 sudo[297723]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:37 compute-0 sudo[297748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:19:37 compute-0 sudo[297748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:37 compute-0 sudo[297748]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:37 compute-0 sudo[297773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:37 compute-0 sudo[297773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:37 compute-0 sudo[297773]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:37 compute-0 sudo[297798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:19:37 compute-0 sudo[297798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 337 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 5.5 MiB/s wr, 22 op/s
Nov 29 08:19:38 compute-0 sudo[297798]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev febb9995-ce5a-47e4-bd5a-e615e5cf4f17 does not exist
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a5c0f552-39e8-4440-afb3-95872ab8b027 does not exist
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 1900e545-63aa-4b8a-9b3e-c3afda6390e1 does not exist
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:19:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:19:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:38 compute-0 sudo[297856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:38 compute-0 sudo[297856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:38 compute-0 sudo[297856]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:38 compute-0 sudo[297882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:19:38 compute-0 sudo[297882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:38 compute-0 sudo[297882]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:38 compute-0 podman[297880]: 2025-11-29 08:19:38.666541591 +0000 UTC m=+0.100884233 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:19:38 compute-0 sudo[297926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:38 compute-0 sudo[297926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:38 compute-0 sudo[297926]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:19:38 compute-0 sudo[297957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:19:38 compute-0 sudo[297957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:19:38
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms']
Nov 29 08:19:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.122130153 +0000 UTC m=+0.047451052 container create 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:19:39 compute-0 systemd[1]: Started libpod-conmon-513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c.scope.
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.098647324 +0000 UTC m=+0.023968273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.219566373 +0000 UTC m=+0.144887292 container init 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:19:39 compute-0 ceph-mon[75237]: pgmap v2043: 305 pgs: 305 active+clean; 337 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 5.5 MiB/s wr, 22 op/s
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:19:39 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.228846421 +0000 UTC m=+0.154167320 container start 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.232417217 +0000 UTC m=+0.157738136 container attach 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:19:39 compute-0 romantic_ellis[298041]: 167 167
Nov 29 08:19:39 compute-0 systemd[1]: libpod-513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c.scope: Deactivated successfully.
Nov 29 08:19:39 compute-0 conmon[298041]: conmon 513ea618907117635e57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c.scope/container/memory.events
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.236882927 +0000 UTC m=+0.162203826 container died 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fccaf573ecd5fd84412a46d0a441c49a9d3caf37e1708dc90bd97dc4788ce557-merged.mount: Deactivated successfully.
Nov 29 08:19:39 compute-0 podman[298024]: 2025-11-29 08:19:39.27919814 +0000 UTC m=+0.204519039 container remove 513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:19:39 compute-0 systemd[1]: libpod-conmon-513ea618907117635e5753f20c5a8a3d28f6e7a2389be6bfe81656c7bfc5c57c.scope: Deactivated successfully.
Nov 29 08:19:39 compute-0 podman[298063]: 2025-11-29 08:19:39.489572556 +0000 UTC m=+0.039639524 container create c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:19:39 compute-0 systemd[1]: Started libpod-conmon-c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620.scope.
Nov 29 08:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-0 podman[298063]: 2025-11-29 08:19:39.471007808 +0000 UTC m=+0.021074776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:39 compute-0 podman[298063]: 2025-11-29 08:19:39.5853399 +0000 UTC m=+0.135406858 container init c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:19:39 compute-0 podman[298063]: 2025-11-29 08:19:39.597370473 +0000 UTC m=+0.147437431 container start c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:19:39 compute-0 podman[298063]: 2025-11-29 08:19:39.600799974 +0000 UTC m=+0.150866932 container attach c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:19:39 compute-0 nova_compute[255040]: 2025-11-29 08:19:39.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:39 compute-0 nova_compute[255040]: 2025-11-29 08:19:39.978 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:39 compute-0 nova_compute[255040]: 2025-11-29 08:19:39.979 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
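The _reclaim_queued_deletes message shows how these periodic tasks gate themselves on configuration: a non-positive reclaim_instance_interval makes the task return immediately instead of looking for soft-deleted instances. A sketch of that guard under stated assumptions (the option name and DEBUG text come from the log; the surrounding shape is assumed):

    import logging

    LOG = logging.getLogger(__name__)

    class CONF:  # stand-in for oslo.config's CONF object (assumption)
        reclaim_instance_interval = 0

    def reclaim_queued_deletes():
        # Guard matching the DEBUG line above: a non-positive interval
        # disables the task entirely.
        if CONF.reclaim_instance_interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise reclaim instances soft-deleted longer than the interval.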
Nov 29 08:19:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 7.7 MiB/s wr, 44 op/s
Nov 29 08:19:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.220 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.536 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.537 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.552 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.644 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.644 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.652 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.653 255071 INFO nova.compute.claims [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:19:40 compute-0 romantic_dubinsky[298080]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:19:40 compute-0 romantic_dubinsky[298080]: --> relative data size: 1.0
Nov 29 08:19:40 compute-0 romantic_dubinsky[298080]: --> All data devices are unavailable
Nov 29 08:19:40 compute-0 systemd[1]: libpod-c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620.scope: Deactivated successfully.
Nov 29 08:19:40 compute-0 systemd[1]: libpod-c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620.scope: Consumed 1.084s CPU time.
Nov 29 08:19:40 compute-0 podman[298063]: 2025-11-29 08:19:40.730637027 +0000 UTC m=+1.280704005 container died c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf900a62db358e9886b9e2f9966076fce914612e20a6b61429536c819b0b14e6-merged.mount: Deactivated successfully.
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.780 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:40 compute-0 podman[298063]: 2025-11-29 08:19:40.805994555 +0000 UTC m=+1.356061503 container remove c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:19:40 compute-0 systemd[1]: libpod-conmon-c0b9b35ca42a5f71f94694c4998944a9455c0a80c029612b0a1e6ec4516c2620.scope: Deactivated successfully.
Nov 29 08:19:40 compute-0 sudo[297957]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:40 compute-0 sudo[298121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:40 compute-0 sudo[298121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:40 compute-0 sudo[298121]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:40 compute-0 nova_compute[255040]: 2025-11-29 08:19:40.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:40 compute-0 sudo[298148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:19:40 compute-0 sudo[298148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:40 compute-0 sudo[298148]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.012 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:41 compute-0 sudo[298190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:41 compute-0 sudo[298190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:41 compute-0 sudo[298190]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:41 compute-0 sudo[298215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:19:41 compute-0 sudo[298215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835471418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:41 compute-0 ceph-mon[75237]: pgmap v2044: 305 pgs: 305 active+clean; 365 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 7.7 MiB/s wr, 44 op/s
Nov 29 08:19:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1835471418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.245 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.253 255071 DEBUG nova.compute.provider_tree [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.273 255071 DEBUG nova.scheduler.client.report [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.301 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.302 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.305 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.305 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.306 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.306 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.394 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.396 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.426 255071 INFO nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.450 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.496783278 +0000 UTC m=+0.042171431 container create ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.501 255071 INFO nova.virt.block_device [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Booting with volume 9d326a13-082a-48ff-a152-c8f6b3c1a7e9 at /dev/vda
Nov 29 08:19:41 compute-0 systemd[1]: Started libpod-conmon-ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a.scope.
Nov 29 08:19:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.576941094 +0000 UTC m=+0.122329277 container init ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.480902423 +0000 UTC m=+0.026290596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.583144391 +0000 UTC m=+0.128532544 container start ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.586575223 +0000 UTC m=+0.131963376 container attach ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:19:41 compute-0 blissful_hopper[298317]: 167 167
Nov 29 08:19:41 compute-0 systemd[1]: libpod-ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a.scope: Deactivated successfully.
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.588894045 +0000 UTC m=+0.134282218 container died ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.599 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30c2fd9a779f9b88f8827cd3b6f6c8589040067155ddbeb45c5e01c816a9695-merged.mount: Deactivated successfully.
Nov 29 08:19:41 compute-0 podman[298282]: 2025-11-29 08:19:41.627963282 +0000 UTC m=+0.173351455 container remove ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hopper, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.643 255071 DEBUG nova.policy [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a08e1ef223b748efa4d5bdc804150f97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd25c6608beec4f818c6e402939192f16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:19:41 compute-0 systemd[1]: libpod-conmon-ba56f62d53c9157e55bccc2f2ada895d3b3ca97656a26bec9473e3f4387d5e1a.scope: Deactivated successfully.
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.689 255071 DEBUG os_brick.utils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.690 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.707 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.707 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[3285f73b-870c-480b-b427-904014948d47]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.709 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.717 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.717 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[81b33e01-7644-4d04-9b5b-13d8377d18da]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.719 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.727 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.727 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[749c69cc-0ddd-45a1-8e3f-fea6b122bd6f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.729 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[bbee3d1a-f664-4bce-92f0-5aab0e5eb43b]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.730 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134474107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.763 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.767 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.767 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.767 255071 DEBUG os_brick.initiator.connectors.lightos [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.768 255071 DEBUG os_brick.utils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.768 255071 DEBUG nova.virt.block_device [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updating existing volume attachment record: 047811b0-07fe-4faf-8d88-72a9351e550b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.772 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:41 compute-0 podman[298350]: 2025-11-29 08:19:41.833835096 +0000 UTC m=+0.061473428 container create e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:19:41 compute-0 systemd[1]: Started libpod-conmon-e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9.scope.
Nov 29 08:19:41 compute-0 podman[298350]: 2025-11-29 08:19:41.801815568 +0000 UTC m=+0.029453970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6a208f6a3b75f9f1d2225c08dfc84ab7866f5e4d8bc75f51b29808d69cb1f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6a208f6a3b75f9f1d2225c08dfc84ab7866f5e4d8bc75f51b29808d69cb1f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6a208f6a3b75f9f1d2225c08dfc84ab7866f5e4d8bc75f51b29808d69cb1f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6a208f6a3b75f9f1d2225c08dfc84ab7866f5e4d8bc75f51b29808d69cb1f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:41 compute-0 podman[298350]: 2025-11-29 08:19:41.92434164 +0000 UTC m=+0.151979932 container init e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:19:41 compute-0 podman[298350]: 2025-11-29 08:19:41.931832831 +0000 UTC m=+0.159471123 container start e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:19:41 compute-0 podman[298350]: 2025-11-29 08:19:41.935212471 +0000 UTC m=+0.162850763 container attach e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.943 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.944 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4307MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.944 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:41 compute-0 nova_compute[255040]: 2025-11-29 08:19:41.945 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.007 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance 71726c35-b087-417d-aa8f-40239b043464 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.008 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.048 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1134474107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2103496914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:19:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077066318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.671 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:42 compute-0 sweet_jang[298367]: {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     "0": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "devices": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "/dev/loop3"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             ],
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_name": "ceph_lv0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_size": "21470642176",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "name": "ceph_lv0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "tags": {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_name": "ceph",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.crush_device_class": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.encrypted": "0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_id": "0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.vdo": "0"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             },
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "vg_name": "ceph_vg0"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         }
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     ],
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     "1": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "devices": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "/dev/loop4"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             ],
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_name": "ceph_lv1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_size": "21470642176",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "name": "ceph_lv1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "tags": {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_name": "ceph",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.crush_device_class": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.encrypted": "0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_id": "1",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.vdo": "0"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             },
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "vg_name": "ceph_vg1"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         }
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     ],
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     "2": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "devices": [
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "/dev/loop5"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             ],
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_name": "ceph_lv2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_size": "21470642176",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "name": "ceph_lv2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "tags": {
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.cluster_name": "ceph",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.crush_device_class": "",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.encrypted": "0",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osd_id": "2",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:                 "ceph.vdo": "0"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             },
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "type": "block",
Nov 29 08:19:42 compute-0 sweet_jang[298367]:             "vg_name": "ceph_vg2"
Nov 29 08:19:42 compute-0 sweet_jang[298367]:         }
Nov 29 08:19:42 compute-0 sweet_jang[298367]:     ]
Nov 29 08:19:42 compute-0 sweet_jang[298367]: }
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.680 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:19:42 compute-0 systemd[1]: libpod-e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9.scope: Deactivated successfully.
Nov 29 08:19:42 compute-0 podman[298350]: 2025-11-29 08:19:42.701835195 +0000 UTC m=+0.929473487 container died e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.701 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.729 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.730 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.731 255071 INFO nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Creating image(s)
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.731 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.732 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Ensure instance console log exists: /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.732 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.732 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.733 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.734 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d6a208f6a3b75f9f1d2225c08dfc84ab7866f5e4d8bc75f51b29808d69cb1f8-merged.mount: Deactivated successfully.
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.734 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:42 compute-0 podman[298350]: 2025-11-29 08:19:42.772842597 +0000 UTC m=+1.000480889 container remove e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 08:19:42 compute-0 systemd[1]: libpod-conmon-e48ac23e4ab63c1775c0c64a2f55a1da1bc4dd00f5aed745f876887c6990f0a9.scope: Deactivated successfully.
Nov 29 08:19:42 compute-0 sudo[298215]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:42 compute-0 nova_compute[255040]: 2025-11-29 08:19:42.834 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Successfully created port: 4d2a450f-06f0-46f4-a472-c897cc576408 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:19:42 compute-0 sudo[298409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:42 compute-0 sudo[298409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:42 compute-0 sudo[298409]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:42 compute-0 sudo[298434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:19:42 compute-0 sudo[298434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:42 compute-0 sudo[298434]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:43 compute-0 sudo[298459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:43 compute-0 sudo[298459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:43 compute-0 sudo[298459]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:43 compute-0 sudo[298484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:19:43 compute-0 sudo[298484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:43 compute-0 ceph-mon[75237]: pgmap v2045: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2103496914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4077066318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:19:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.497340432 +0000 UTC m=+0.044839662 container create a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:19:43 compute-0 systemd[1]: Started libpod-conmon-a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11.scope.
Nov 29 08:19:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.475530288 +0000 UTC m=+0.023029538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.574503679 +0000 UTC m=+0.122002889 container init a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.581130037 +0000 UTC m=+0.128629217 container start a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.584511777 +0000 UTC m=+0.132010967 container attach a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:19:43 compute-0 pensive_proskuriakova[298564]: 167 167
Nov 29 08:19:43 compute-0 systemd[1]: libpod-a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11.scope: Deactivated successfully.
Nov 29 08:19:43 compute-0 conmon[298564]: conmon a8b35a8f7d018cecd85c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11.scope/container/memory.events
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.587548638 +0000 UTC m=+0.135047828 container died a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7346ac61bd8559dcaaeaac92579e9e64b47797beb5e31a3784e1f8bf8b3d918-merged.mount: Deactivated successfully.
Nov 29 08:19:43 compute-0 podman[298548]: 2025-11-29 08:19:43.625383562 +0000 UTC m=+0.172882752 container remove a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_proskuriakova, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:19:43 compute-0 systemd[1]: libpod-conmon-a8b35a8f7d018cecd85cbd1f8be1e9294e0e19b7d4a3f8616ecdb4cf88217f11.scope: Deactivated successfully.
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.734 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.736 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.736 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.749 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Successfully updated port: 4d2a450f-06f0-46f4-a472-c897cc576408 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.754 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.754 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.755 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.762 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.762 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquired lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.762 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:19:43 compute-0 podman[298589]: 2025-11-29 08:19:43.780778264 +0000 UTC m=+0.041325188 container create a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:19:43 compute-0 systemd[1]: Started libpod-conmon-a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164.scope.
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.853 255071 DEBUG nova.compute.manager [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-changed-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.854 255071 DEBUG nova.compute.manager [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Refreshing instance network info cache due to event network-changed-4d2a450f-06f0-46f4-a472-c897cc576408. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.854 255071 DEBUG oslo_concurrency.lockutils [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:43 compute-0 podman[298589]: 2025-11-29 08:19:43.762664119 +0000 UTC m=+0.023211043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4bf0b1634e521fe01841c32c33388a039d2024db88dd807acc5b9a336762a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4bf0b1634e521fe01841c32c33388a039d2024db88dd807acc5b9a336762a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4bf0b1634e521fe01841c32c33388a039d2024db88dd807acc5b9a336762a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af4bf0b1634e521fe01841c32c33388a039d2024db88dd807acc5b9a336762a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:43 compute-0 podman[298589]: 2025-11-29 08:19:43.878706957 +0000 UTC m=+0.139253881 container init a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:19:43 compute-0 podman[298589]: 2025-11-29 08:19:43.885711114 +0000 UTC m=+0.146258028 container start a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:19:43 compute-0 podman[298589]: 2025-11-29 08:19:43.889506236 +0000 UTC m=+0.150053220 container attach a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.904 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:43 compute-0 nova_compute[255040]: 2025-11-29 08:19:43.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.496 255071 DEBUG nova.network.neutron [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updating instance_info_cache with network_info: [{"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.513 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Releasing lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.513 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Instance network_info: |[{"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.514 255071 DEBUG oslo_concurrency.lockutils [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.514 255071 DEBUG nova.network.neutron [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Refreshing network info cache for port 4d2a450f-06f0-46f4-a472-c897cc576408 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.518 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Start _get_guest_xml network_info=[{"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '71726c35-b087-417d-aa8f-40239b043464', 'attached_at': '', 'detached_at': '', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'serial': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '047811b0-07fe-4faf-8d88-72a9351e550b', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.526 255071 WARNING nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.536 255071 DEBUG nova.virt.libvirt.host [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.537 255071 DEBUG nova.virt.libvirt.host [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.541 255071 DEBUG nova.virt.libvirt.host [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.541 255071 DEBUG nova.virt.libvirt.host [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.542 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.542 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.543 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.543 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.544 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.544 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.544 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.544 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.545 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.545 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.545 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.545 255071 DEBUG nova.virt.hardware [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.576 255071 DEBUG nova.storage.rbd_utils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 71726c35-b087-417d-aa8f-40239b043464_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:44 compute-0 nova_compute[255040]: 2025-11-29 08:19:44.582 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:44 compute-0 gracious_mayer[298605]: {
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_id": 2,
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "type": "bluestore"
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     },
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_id": 0,
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "type": "bluestore"
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     },
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_id": 1,
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:         "type": "bluestore"
Nov 29 08:19:44 compute-0 gracious_mayer[298605]:     }
Nov 29 08:19:44 compute-0 gracious_mayer[298605]: }
Nov 29 08:19:44 compute-0 systemd[1]: libpod-a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164.scope: Deactivated successfully.
Nov 29 08:19:44 compute-0 systemd[1]: libpod-a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164.scope: Consumed 1.032s CPU time.
Nov 29 08:19:44 compute-0 conmon[298605]: conmon a7742e82b6c5c719ba70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164.scope/container/memory.events
Nov 29 08:19:44 compute-0 podman[298589]: 2025-11-29 08:19:44.917277045 +0000 UTC m=+1.177823989 container died a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9af4bf0b1634e521fe01841c32c33388a039d2024db88dd807acc5b9a336762a-merged.mount: Deactivated successfully.
Nov 29 08:19:44 compute-0 podman[298589]: 2025-11-29 08:19:44.98169125 +0000 UTC m=+1.242238164 container remove a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:19:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:19:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012609658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:44 compute-0 systemd[1]: libpod-conmon-a7742e82b6c5c719ba701f7ebd9de8b041e80ec1ae30ccd9108e70c558424164.scope: Deactivated successfully.
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.008 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:45 compute-0 sudo[298484]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:19:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:19:45 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 392fd9e7-4aae-4af4-9a4a-be4c418e1f5c does not exist
Nov 29 08:19:45 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6f26cf25-dc34-4ea3-8936-6c1ef42504e8 does not exist
Nov 29 08:19:45 compute-0 sudo[298692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:19:45 compute-0 sudo[298692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:45 compute-0 sudo[298692]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:45 compute-0 sudo[298717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:19:45 compute-0 sudo[298717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.151 255071 DEBUG os_brick.encryptors [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Using volume encryption metadata '{'encryption_key_id': '05d2cffa-095c-43f7-b086-85f02b724888', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '71726c35-b087-417d-aa8f-40239b043464', 'attached_at': '', 'detached_at': '', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:19:45 compute-0 sudo[298717]: pam_unix(sudo:session): session closed for user root
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.154 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:19:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.167 255071 DEBUG barbicanclient.v1.secrets [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/05d2cffa-095c-43f7-b086-85f02b724888 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.168 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.190 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.190 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.217 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.217 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.223 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.243 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.243 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 ceph-mon[75237]: pgmap v2046: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2012609658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:19:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:45 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.265 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.265 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.384 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.385 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.412 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.412 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.438 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.439 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.459 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.460 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.489 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.490 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.511 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.511 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.540 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.541 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.565 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.566 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.588 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.589 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.609 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.609 255071 INFO barbicanclient.base [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/05d2cffa-095c-43f7-b086-85f02b724888
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.642 255071 DEBUG barbicanclient.client [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.643 255071 DEBUG nova.virt.libvirt.host [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <volume>9d326a13-082a-48ff-a152-c8f6b3c1a7e9</volume>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:19:45 compute-0 nova_compute[255040]: </secret>
Nov 29 08:19:45 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.689 255071 DEBUG nova.virt.libvirt.vif [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1829653041',display_name='tempest-TransferEncryptedVolumeTest-server-1829653041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1829653041',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-yo98n5nd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:41Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=71726c35-b087-417d-aa8f-40239b043464,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.690 255071 DEBUG nova.network.os_vif_util [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.691 255071 DEBUG nova.network.os_vif_util [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.693 255071 DEBUG nova.objects.instance [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71726c35-b087-417d-aa8f-40239b043464 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.706 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <uuid>71726c35-b087-417d-aa8f-40239b043464</uuid>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <name>instance-0000001b</name>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1829653041</nova:name>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:19:44</nova:creationTime>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:user uuid="a08e1ef223b748efa4d5bdc804150f97">tempest-TransferEncryptedVolumeTest-1043863442-project-member</nova:user>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:project uuid="d25c6608beec4f818c6e402939192f16">tempest-TransferEncryptedVolumeTest-1043863442</nova:project>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <nova:port uuid="4d2a450f-06f0-46f4-a472-c897cc576408">
Nov 29 08:19:45 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <system>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="serial">71726c35-b087-417d-aa8f-40239b043464</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="uuid">71726c35-b087-417d-aa8f-40239b043464</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </system>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <os>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </os>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <features>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </features>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/71726c35-b087-417d-aa8f-40239b043464_disk.config">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </source>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </source>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <serial>9d326a13-082a-48ff-a152-c8f6b3c1a7e9</serial>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:19:45 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="71daaf32-f187-4380-938e-8836627a2562"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:ae:f3:f2"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <target dev="tap4d2a450f-06"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/console.log" append="off"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <video>
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </video>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:19:45 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:19:45 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:19:45 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:19:45 compute-0 nova_compute[255040]: </domain>
Nov 29 08:19:45 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.707 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Preparing to wait for external event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.707 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.707 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.708 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.708 255071 DEBUG nova.virt.libvirt.vif [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1829653041',display_name='tempest-TransferEncryptedVolumeTest-server-1829653041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1829653041',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-yo98n5nd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:41Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=71726c35-b087-417d-aa8f-40239b043464,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.709 255071 DEBUG nova.network.os_vif_util [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.709 255071 DEBUG nova.network.os_vif_util [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.710 255071 DEBUG os_vif [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.710 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.711 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.711 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.715 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.715 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d2a450f-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.716 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d2a450f-06, col_values=(('external_ids', {'iface-id': '4d2a450f-06f0-46f4-a472-c897cc576408', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:f3:f2', 'vm-uuid': '71726c35-b087-417d-aa8f-40239b043464'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.718 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.719 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:19:45 compute-0 NetworkManager[49116]: <info>  [1764404385.7205] manager: (tap4d2a450f-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.726 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.728 255071 INFO os_vif [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06')
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.790 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.790 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.790 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No VIF found with MAC fa:16:3e:ae:f3:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.791 255071 INFO nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Using config drive
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.821 255071 DEBUG nova.storage.rbd_utils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 71726c35-b087-417d-aa8f-40239b043464_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.866 255071 DEBUG nova.network.neutron [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updated VIF entry in instance network info cache for port 4d2a450f-06f0-46f4-a472-c897cc576408. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.867 255071 DEBUG nova.network.neutron [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updating instance_info_cache with network_info: [{"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:45 compute-0 nova_compute[255040]: 2025-11-29 08:19:45.882 255071 DEBUG oslo_concurrency.lockutils [req-b69aa19c-d766-4f2d-963a-c87317b7ba06 req-cf8e2ebe-6ac1-4640-90ab-552b13dbf792 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.334 255071 INFO nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Creating config drive at /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.345 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1c1frnb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.496 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1c1frnb" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.523 255071 DEBUG nova.storage.rbd_utils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 71726c35-b087-417d-aa8f-40239b043464_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.528 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config 71726c35-b087-417d-aa8f-40239b043464_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.694 255071 DEBUG oslo_concurrency.processutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config 71726c35-b087-417d-aa8f-40239b043464_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.695 255071 INFO nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Deleting local config drive /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464/disk.config because it was imported into RBD.
Nov 29 08:19:46 compute-0 kernel: tap4d2a450f-06: entered promiscuous mode
Nov 29 08:19:46 compute-0 NetworkManager[49116]: <info>  [1764404386.7539] manager: (tap4d2a450f-06): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 08:19:46 compute-0 ovn_controller[153295]: 2025-11-29T08:19:46Z|00261|binding|INFO|Claiming lport 4d2a450f-06f0-46f4-a472-c897cc576408 for this chassis.
Nov 29 08:19:46 compute-0 ovn_controller[153295]: 2025-11-29T08:19:46Z|00262|binding|INFO|4d2a450f-06f0-46f4-a472-c897cc576408: Claiming fa:16:3e:ae:f3:f2 10.100.0.10
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.756 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.769 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:f3:f2 10.100.0.10'], port_security=['fa:16:3e:ae:f3:f2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '71726c35-b087-417d-aa8f-40239b043464', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2022bec6-39c6-4719-b618-05c5c5bc6af6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=4d2a450f-06f0-46f4-a472-c897cc576408) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.770 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 4d2a450f-06f0-46f4-a472-c897cc576408 in datapath a234aa60-c8c5-4137-96cd-77f576498813 bound to our chassis
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.771 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:19:46 compute-0 systemd-machined[216271]: New machine qemu-27-instance-0000001b.
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.786 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[82b9cc04-9a8a-4e4e-bdda-6835cecf249f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.787 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa234aa60-c1 in ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.788 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa234aa60-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.788 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[67fb1e46-e6bb-4615-87ca-b43ed83d6090]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.789 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3da974b1-74e4-4f45-9659-773dbe8427ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 systemd-udevd[298816]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.802 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1d75c8-1841-4ec7-a212-24c7d56b882e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 NetworkManager[49116]: <info>  [1764404386.8049] device (tap4d2a450f-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:19:46 compute-0 NetworkManager[49116]: <info>  [1764404386.8059] device (tap4d2a450f-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:19:46 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.827 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[686d1261-43fb-46f2-a9d8-0676a6f19308]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.830 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-0 ovn_controller[153295]: 2025-11-29T08:19:46Z|00263|binding|INFO|Setting lport 4d2a450f-06f0-46f4-a472-c897cc576408 ovn-installed in OVS
Nov 29 08:19:46 compute-0 ovn_controller[153295]: 2025-11-29T08:19:46Z|00264|binding|INFO|Setting lport 4d2a450f-06f0-46f4-a472-c897cc576408 up in Southbound
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.836 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.859 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc351cb-7733-408c-8a68-2d4ed08570a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.863 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca26faa-8e81-4129-8595-96e9371fc7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 NetworkManager[49116]: <info>  [1764404386.8643] manager: (tapa234aa60-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 08:19:46 compute-0 systemd-udevd[298819]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.896 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a82c99d2-7195-4a30-847f-06ffed957d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.900 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ab0d0a-e1d4-4ff3-a450-dd4f410e90e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 NetworkManager[49116]: <info>  [1764404386.9210] device (tapa234aa60-c0): carrier: link connected
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.925 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[4e95a172-eb2a-4dba-b8f4-15370ad4c9a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.941 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a4309144-c4e3-4a11-9837-49e7f74607b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662426, 'reachable_time': 43871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298848, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.954 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d690ee8e-eb71-49c1-bc11-69226da5cc37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9b6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662426, 'tstamp': 662426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298849, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.968 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dccf51-45bc-4c29-a53b-1090ff1eab98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662426, 'reachable_time': 43871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298850, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.989 255071 DEBUG nova.compute.manager [req-0e8743d0-d5e3-47ac-9b2f-c9fa42a55f53 req-6fdfc317-880f-4ea5-b504-c5c2202f624b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.989 255071 DEBUG oslo_concurrency.lockutils [req-0e8743d0-d5e3-47ac-9b2f-c9fa42a55f53 req-6fdfc317-880f-4ea5-b504-c5c2202f624b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.989 255071 DEBUG oslo_concurrency.lockutils [req-0e8743d0-d5e3-47ac-9b2f-c9fa42a55f53 req-6fdfc317-880f-4ea5-b504-c5c2202f624b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.990 255071 DEBUG oslo_concurrency.lockutils [req-0e8743d0-d5e3-47ac-9b2f-c9fa42a55f53 req-6fdfc317-880f-4ea5-b504-c5c2202f624b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:46 compute-0 nova_compute[255040]: 2025-11-29 08:19:46.990 255071 DEBUG nova.compute.manager [req-0e8743d0-d5e3-47ac-9b2f-c9fa42a55f53 req-6fdfc317-880f-4ea5-b504-c5c2202f624b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Processing event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:19:46 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:46.998 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[985154a2-5ae5-4cca-801d-880583e7b050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.053 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[0c63d7bc-6728-45ac-95a2-73c68c485f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.054 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.055 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.055 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa234aa60-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:47 compute-0 kernel: tapa234aa60-c0: entered promiscuous mode
Nov 29 08:19:47 compute-0 NetworkManager[49116]: <info>  [1764404387.0574] manager: (tapa234aa60-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 08:19:47 compute-0 nova_compute[255040]: 2025-11-29 08:19:47.056 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.060 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa234aa60-c0, col_values=(('external_ids', {'iface-id': '821a8872-735e-4a04-8244-d3a33097614d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:19:47 compute-0 nova_compute[255040]: 2025-11-29 08:19:47.061 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:47 compute-0 ovn_controller[153295]: 2025-11-29T08:19:47Z|00265|binding|INFO|Releasing lport 821a8872-735e-4a04-8244-d3a33097614d from this chassis (sb_readonly=0)
Nov 29 08:19:47 compute-0 nova_compute[255040]: 2025-11-29 08:19:47.062 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.062 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.063 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[90ad4840-c10f-46d5-90b7-e760a9dd708b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.064 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:19:47 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:19:47.065 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'env', 'PROCESS_TAG=haproxy-a234aa60-c8c5-4137-96cd-77f576498813', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a234aa60-c8c5-4137-96cd-77f576498813.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:19:47 compute-0 nova_compute[255040]: 2025-11-29 08:19:47.074 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:47 compute-0 ceph-mon[75237]: pgmap v2047: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:19:47 compute-0 podman[298918]: 2025-11-29 08:19:47.41478012 +0000 UTC m=+0.046144817 container create 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:19:47 compute-0 systemd[1]: Started libpod-conmon-19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605.scope.
Nov 29 08:19:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:19:47 compute-0 podman[298918]: 2025-11-29 08:19:47.391174158 +0000 UTC m=+0.022538885 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/255ecdf045cc05c767106509fe4bf0f80319461aa06446fa71fd28d16776e7ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:19:47 compute-0 podman[298918]: 2025-11-29 08:19:47.511225083 +0000 UTC m=+0.142589810 container init 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:19:47 compute-0 podman[298918]: 2025-11-29 08:19:47.516867495 +0000 UTC m=+0.148232202 container start 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:19:47 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [NOTICE]   (298937) : New worker (298939) forked
Nov 29 08:19:47 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [NOTICE]   (298937) : Loading success.
Nov 29 08:19:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.7 MiB/s wr, 39 op/s
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.063 255071 DEBUG nova.compute.manager [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.064 255071 DEBUG oslo_concurrency.lockutils [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.064 255071 DEBUG oslo_concurrency.lockutils [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.065 255071 DEBUG oslo_concurrency.lockutils [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.065 255071 DEBUG nova.compute.manager [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] No waiting events found dispatching network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.065 255071 WARNING nova.compute.manager [req-67f912bf-617d-4cc5-a64b-3786d5baeda6 req-24b84027-c2b5-40cd-b072-95ef9bb64e54 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received unexpected event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 for instance with vm_state building and task_state spawning.
Nov 29 08:19:49 compute-0 ceph-mon[75237]: pgmap v2048: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.7 MiB/s wr, 39 op/s
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.445 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.447 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404389.4441986, 71726c35-b087-417d-aa8f-40239b043464 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.448 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] VM Started (Lifecycle Event)
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.453 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.460 255071 INFO nova.virt.libvirt.driver [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] Instance spawned successfully.
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.461 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.476 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.488 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.493 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.494 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.495 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.496 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.497 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.498 255071 DEBUG nova.virt.libvirt.driver [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.513 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.514 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404389.4448144, 71726c35-b087-417d-aa8f-40239b043464 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.515 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] VM Paused (Lifecycle Event)
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.543 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.548 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404389.452156, 71726c35-b087-417d-aa8f-40239b043464 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.549 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] VM Resumed (Lifecycle Event)
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.568 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.572 255071 INFO nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Took 6.84 seconds to spawn the instance on the hypervisor.
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.573 255071 DEBUG nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.576 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.603 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.635 255071 INFO nova.compute.manager [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Took 9.03 seconds to build instance.
Nov 29 08:19:49 compute-0 nova_compute[255040]: 2025-11-29 08:19:49.661 255071 DEBUG oslo_concurrency.lockutils [None req-2ea00bcf-f9ba-4cc2-8776-6e8882d4d260 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:19:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.9 MiB/s wr, 22 op/s
Nov 29 08:19:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:50 compute-0 nova_compute[255040]: 2025-11-29 08:19:50.224 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:50 compute-0 nova_compute[255040]: 2025-11-29 08:19:50.718 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:50 compute-0 podman[298954]: 2025-11-29 08:19:50.892037937 +0000 UTC m=+0.061759255 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:19:51 compute-0 ceph-mon[75237]: pgmap v2049: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.9 MiB/s wr, 22 op/s
Nov 29 08:19:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Nov 29 08:19:53 compute-0 ceph-mon[75237]: pgmap v2050: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Nov 29 08:19:53 compute-0 NetworkManager[49116]: <info>  [1764404393.4167] manager: (patch-br-int-to-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 08:19:53 compute-0 NetworkManager[49116]: <info>  [1764404393.4184] manager: (patch-provnet-0b50aea8-d2d6-4416-bd00-1ceabb7a7c1d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.415 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.572 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:53 compute-0 ovn_controller[153295]: 2025-11-29T08:19:53Z|00266|binding|INFO|Releasing lport 821a8872-735e-4a04-8244-d3a33097614d from this chassis (sb_readonly=0)
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.597 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.783 255071 DEBUG nova.compute.manager [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-changed-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.783 255071 DEBUG nova.compute.manager [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Refreshing instance network info cache due to event network-changed-4d2a450f-06f0-46f4-a472-c897cc576408. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.784 255071 DEBUG oslo_concurrency.lockutils [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.784 255071 DEBUG oslo_concurrency.lockutils [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.784 255071 DEBUG nova.network.neutron [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Refreshing network info cache for port 4d2a450f-06f0-46f4-a472-c897cc576408 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:19:53 compute-0 nova_compute[255040]: 2025-11-29 08:19:53.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:19:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 29 08:19:54 compute-0 ceph-mon[75237]: pgmap v2051: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 29 08:19:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:19:55 compute-0 nova_compute[255040]: 2025-11-29 08:19:55.227 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:55 compute-0 nova_compute[255040]: 2025-11-29 08:19:55.447 255071 DEBUG nova.network.neutron [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updated VIF entry in instance network info cache for port 4d2a450f-06f0-46f4-a472-c897cc576408. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:19:55 compute-0 nova_compute[255040]: 2025-11-29 08:19:55.448 255071 DEBUG nova.network.neutron [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updating instance_info_cache with network_info: [{"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:19:55 compute-0 nova_compute[255040]: 2025-11-29 08:19:55.466 255071 DEBUG oslo_concurrency.lockutils [req-c11693e9-d95d-4df1-800e-e6d8b0a6ed5b req-e4faeef4-5757-4a6a-8d5c-c526cdb0cd7f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-71726c35-b087-417d-aa8f-40239b043464" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:19:55 compute-0 nova_compute[255040]: 2025-11-29 08:19:55.720 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0047219280092140395 of space, bias 1.0, pg target 1.4165784027642119 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:19:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 29 08:19:56 compute-0 podman[298975]: 2025-11-29 08:19:56.889839047 +0000 UTC m=+0.054860860 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:19:57 compute-0 ceph-mon[75237]: pgmap v2052: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:19:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:19:58 compute-0 ceph-mon[75237]: pgmap v2053: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:19:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:19:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1501490805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:19:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:19:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1501490805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:20:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:00 compute-0 nova_compute[255040]: 2025-11-29 08:20:00.228 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1501490805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:20:00 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1501490805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:00 compute-0 nova_compute[255040]: 2025-11-29 08:20:00.722 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:01 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:20:01 compute-0 ceph-mon[75237]: pgmap v2054: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:20:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 29 08:20:02 compute-0 ceph-mon[75237]: pgmap v2055: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 29 08:20:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Nov 29 08:20:04 compute-0 ceph-mon[75237]: pgmap v2056: 305 pgs: 305 active+clean; 385 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Nov 29 08:20:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:05 compute-0 nova_compute[255040]: 2025-11-29 08:20:05.230 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:05 compute-0 ovn_controller[153295]: 2025-11-29T08:20:05Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:f3:f2 10.100.0.10
Nov 29 08:20:05 compute-0 ovn_controller[153295]: 2025-11-29T08:20:05Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:f3:f2 10.100.0.10
Nov 29 08:20:05 compute-0 nova_compute[255040]: 2025-11-29 08:20:05.724 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 76 op/s
Nov 29 08:20:06 compute-0 ceph-mon[75237]: pgmap v2057: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 76 op/s
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 385 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:08 compute-0 podman[298996]: 2025-11-29 08:20:08.958007039 +0000 UTC m=+0.122102322 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:20:09 compute-0 ceph-mon[75237]: pgmap v2058: 305 pgs: 305 active+clean; 385 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Nov 29 08:20:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 407 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 2.8 MiB/s wr, 58 op/s
Nov 29 08:20:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:10 compute-0 nova_compute[255040]: 2025-11-29 08:20:10.232 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:10 compute-0 nova_compute[255040]: 2025-11-29 08:20:10.726 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:11 compute-0 ceph-mon[75237]: pgmap v2059: 305 pgs: 305 active+clean; 407 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 2.8 MiB/s wr, 58 op/s
Nov 29 08:20:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 440 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 5.4 MiB/s wr, 68 op/s
Nov 29 08:20:13 compute-0 ceph-mon[75237]: pgmap v2060: 305 pgs: 305 active+clean; 440 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 5.4 MiB/s wr, 68 op/s
Nov 29 08:20:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 440 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 5.4 MiB/s wr, 66 op/s
Nov 29 08:20:14 compute-0 sshd-session[299022]: Received disconnect from 45.78.219.195 port 58226:11: Bye Bye [preauth]
Nov 29 08:20:14 compute-0 sshd-session[299022]: Disconnected from authenticating user root 45.78.219.195 port 58226 [preauth]
Nov 29 08:20:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:15 compute-0 ceph-mon[75237]: pgmap v2061: 305 pgs: 305 active+clean; 440 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 487 KiB/s rd, 5.4 MiB/s wr, 66 op/s
Nov 29 08:20:15 compute-0 nova_compute[255040]: 2025-11-29 08:20:15.235 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:15 compute-0 nova_compute[255040]: 2025-11-29 08:20:15.728 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 5.8 MiB/s wr, 74 op/s
Nov 29 08:20:17 compute-0 ceph-mon[75237]: pgmap v2062: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 491 KiB/s rd, 5.8 MiB/s wr, 74 op/s
Nov 29 08:20:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 4.6 MiB/s wr, 49 op/s
Nov 29 08:20:19 compute-0 ceph-mon[75237]: pgmap v2063: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 209 KiB/s rd, 4.6 MiB/s wr, 49 op/s
Nov 29 08:20:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 3.7 MiB/s wr, 31 op/s
Nov 29 08:20:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:20 compute-0 nova_compute[255040]: 2025-11-29 08:20:20.239 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:20 compute-0 nova_compute[255040]: 2025-11-29 08:20:20.730 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:21 compute-0 ceph-mon[75237]: pgmap v2064: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 3.7 MiB/s wr, 31 op/s
Nov 29 08:20:21 compute-0 podman[299024]: 2025-11-29 08:20:21.928337491 +0000 UTC m=+0.090555117 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 08:20:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 MiB/s wr, 18 op/s
Nov 29 08:20:23 compute-0 ceph-mon[75237]: pgmap v2065: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 MiB/s wr, 18 op/s
Nov 29 08:20:23 compute-0 ovn_controller[153295]: 2025-11-29T08:20:23Z|00267|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 29 08:20:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 379 KiB/s wr, 8 op/s
Nov 29 08:20:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:25 compute-0 nova_compute[255040]: 2025-11-29 08:20:25.251 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:25 compute-0 ceph-mon[75237]: pgmap v2066: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 379 KiB/s wr, 8 op/s
Nov 29 08:20:25 compute-0 nova_compute[255040]: 2025-11-29 08:20:25.733 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.065 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.065 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.065 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.065 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.066 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.066 255071 INFO nova.compute.manager [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Terminating instance
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.067 255071 DEBUG nova.compute.manager [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:20:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 380 KiB/s wr, 8 op/s
Nov 29 08:20:26 compute-0 kernel: tap4d2a450f-06 (unregistering): left promiscuous mode
Nov 29 08:20:26 compute-0 NetworkManager[49116]: <info>  [1764404426.1229] device (tap4d2a450f-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:20:26 compute-0 ovn_controller[153295]: 2025-11-29T08:20:26Z|00268|binding|INFO|Releasing lport 4d2a450f-06f0-46f4-a472-c897cc576408 from this chassis (sb_readonly=0)
Nov 29 08:20:26 compute-0 ovn_controller[153295]: 2025-11-29T08:20:26Z|00269|binding|INFO|Setting lport 4d2a450f-06f0-46f4-a472-c897cc576408 down in Southbound
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.132 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 ovn_controller[153295]: 2025-11-29T08:20:26Z|00270|binding|INFO|Removing iface tap4d2a450f-06 ovn-installed in OVS
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.135 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.140 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:f3:f2 10.100.0.10'], port_security=['fa:16:3e:ae:f3:f2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '71726c35-b087-417d-aa8f-40239b043464', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2022bec6-39c6-4719-b618-05c5c5bc6af6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=4d2a450f-06f0-46f4-a472-c897cc576408) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.141 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 4d2a450f-06f0-46f4-a472-c897cc576408 in datapath a234aa60-c8c5-4137-96cd-77f576498813 unbound from our chassis
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.142 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a234aa60-c8c5-4137-96cd-77f576498813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.145 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4ca097-9a37-4cf9-9cce-3ddbae313f77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.145 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace which is not needed anymore
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.153 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 08:20:26 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 16.974s CPU time.
Nov 29 08:20:26 compute-0 systemd-machined[216271]: Machine qemu-27-instance-0000001b terminated.
Nov 29 08:20:26 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [NOTICE]   (298937) : haproxy version is 2.8.14-c23fe91
Nov 29 08:20:26 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [NOTICE]   (298937) : path to executable is /usr/sbin/haproxy
Nov 29 08:20:26 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [WARNING]  (298937) : Exiting Master process...
Nov 29 08:20:26 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [ALERT]    (298937) : Current worker (298939) exited with code 143 (Terminated)
Nov 29 08:20:26 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[298933]: [WARNING]  (298937) : All workers exited. Exiting... (0)
Nov 29 08:20:26 compute-0 systemd[1]: libpod-19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605.scope: Deactivated successfully.
Nov 29 08:20:26 compute-0 podman[299068]: 2025-11-29 08:20:26.293044048 +0000 UTC m=+0.050288519 container died 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.308 255071 INFO nova.virt.libvirt.driver [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] Instance destroyed successfully.
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.308 255071 DEBUG nova.objects.instance [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'resources' on Instance uuid 71726c35-b087-417d-aa8f-40239b043464 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.320 255071 DEBUG nova.compute.manager [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-unplugged-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.321 255071 DEBUG oslo_concurrency.lockutils [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.321 255071 DEBUG oslo_concurrency.lockutils [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.322 255071 DEBUG oslo_concurrency.lockutils [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.322 255071 DEBUG nova.compute.manager [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] No waiting events found dispatching network-vif-unplugged-4d2a450f-06f0-46f4-a472-c897cc576408 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.322 255071 DEBUG nova.compute.manager [req-b3a8f120-22bb-446f-bdb5-4ffc535f9511 req-9e7e56e0-31ae-4edd-a2f9-0111a6d77232 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-unplugged-4d2a450f-06f0-46f4-a472-c897cc576408 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.325 255071 DEBUG nova.virt.libvirt.vif [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1829653041',display_name='tempest-TransferEncryptedVolumeTest-server-1829653041',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1829653041',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-yo98n5nd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:49Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=71726c35-b087-417d-aa8f-40239b043464,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.326 255071 DEBUG nova.network.os_vif_util [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "4d2a450f-06f0-46f4-a472-c897cc576408", "address": "fa:16:3e:ae:f3:f2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a450f-06", "ovs_interfaceid": "4d2a450f-06f0-46f4-a472-c897cc576408", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.327 255071 DEBUG nova.network.os_vif_util [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.328 255071 DEBUG os_vif [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.330 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.330 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d2a450f-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605-userdata-shm.mount: Deactivated successfully.
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.360 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.362 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-255ecdf045cc05c767106509fe4bf0f80319461aa06446fa71fd28d16776e7ed-merged.mount: Deactivated successfully.
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.365 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.367 255071 INFO os_vif [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:f3:f2,bridge_name='br-int',has_traffic_filtering=True,id=4d2a450f-06f0-46f4-a472-c897cc576408,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d2a450f-06')
Nov 29 08:20:26 compute-0 podman[299068]: 2025-11-29 08:20:26.372697121 +0000 UTC m=+0.129941602 container cleanup 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:20:26 compute-0 systemd[1]: libpod-conmon-19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605.scope: Deactivated successfully.
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.423 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.426 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:26 compute-0 podman[299106]: 2025-11-29 08:20:26.451516322 +0000 UTC m=+0.055952150 container remove 19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.458 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[55d5700b-e483-4063-82cc-bb17def3ecf5]: (4, ('Sat Nov 29 08:20:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605)\n19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605\nSat Nov 29 08:20:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605)\n19dea5a33e58e1428e8af6f0301e74730246e9dd3d2108dd7571672ab7abf605\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.460 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[20f7dc02-b465-4132-a30a-f535c2bafbc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.462 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.464 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 kernel: tapa234aa60-c0: left promiscuous mode
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.477 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.479 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.481 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0b03f5-a30d-4ed2-be64-20e36a7c45cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.500 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[b24f3f82-439b-4136-8515-21a15412ec6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.502 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[6e93c221-6e70-4ef0-9acb-2cc77fc2dad0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.518 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[508c9d1f-eded-4a9d-a916-1da6123f645c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662419, 'reachable_time': 27668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299137, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 systemd[1]: run-netns-ovnmeta\x2da234aa60\x2dc8c5\x2d4137\x2d96cd\x2d77f576498813.mount: Deactivated successfully.
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.524 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.524 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[8b70af46-a4e3-4621-acc9-3ae65208e3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:26.525 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.583 255071 INFO nova.virt.libvirt.driver [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Deleting instance files /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464_del
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.584 255071 INFO nova.virt.libvirt.driver [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Deletion of /var/lib/nova/instances/71726c35-b087-417d-aa8f-40239b043464_del complete
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.632 255071 INFO nova.compute.manager [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Took 0.56 seconds to destroy the instance on the hypervisor.
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.634 255071 DEBUG oslo.service.loopingcall [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.635 255071 DEBUG nova.compute.manager [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:20:26 compute-0 nova_compute[255040]: 2025-11-29 08:20:26.635 255071 DEBUG nova.network.neutron [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:20:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:27.145 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:27.146 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:27.146 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:27 compute-0 ceph-mon[75237]: pgmap v2067: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 380 KiB/s wr, 8 op/s
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.716 255071 DEBUG nova.network.neutron [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.739 255071 INFO nova.compute.manager [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] Took 1.10 seconds to deallocate network for instance.
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.816 255071 DEBUG nova.compute.manager [req-71c355aa-78f4-483d-8fa0-be4ca2004f9e req-82e55030-e018-416a-97ca-df83b92c8db2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-deleted-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.882 255071 INFO nova.compute.manager [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Took 0.14 seconds to detach 1 volumes for instance.
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.926 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.927 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:27 compute-0 nova_compute[255040]: 2025-11-29 08:20:27.970 255071 DEBUG oslo_concurrency.processutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:28 compute-0 podman[299139]: 2025-11-29 08:20:28.026760744 +0000 UTC m=+0.190232785 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 29 08:20:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.384 255071 DEBUG nova.compute.manager [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.385 255071 DEBUG oslo_concurrency.lockutils [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "71726c35-b087-417d-aa8f-40239b043464-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.385 255071 DEBUG oslo_concurrency.lockutils [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.385 255071 DEBUG oslo_concurrency.lockutils [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.385 255071 DEBUG nova.compute.manager [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] No waiting events found dispatching network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.385 255071 WARNING nova.compute.manager [req-66ee9124-f9bd-4d65-85ae-9a763f895b1b req-5c47e33f-cd65-41a1-b9ba-f2528916781f cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 71726c35-b087-417d-aa8f-40239b043464] Received unexpected event network-vif-plugged-4d2a450f-06f0-46f4-a472-c897cc576408 for instance with vm_state deleted and task_state None.
Nov 29 08:20:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898661370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.405 255071 DEBUG oslo_concurrency.processutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.410 255071 DEBUG nova.compute.provider_tree [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.425 255071 DEBUG nova.scheduler.client.report [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.443 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.464 255071 INFO nova.scheduler.client.report [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Deleted allocations for instance 71726c35-b087-417d-aa8f-40239b043464
Nov 29 08:20:28 compute-0 nova_compute[255040]: 2025-11-29 08:20:28.547 255071 DEBUG oslo_concurrency.lockutils [None req-135d6f68-0811-4caf-8676-d7c330510fc9 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "71726c35-b087-417d-aa8f-40239b043464" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:29 compute-0 ceph-mon[75237]: pgmap v2068: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 29 08:20:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2898661370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 5.9 KiB/s wr, 2 op/s
Nov 29 08:20:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:30 compute-0 nova_compute[255040]: 2025-11-29 08:20:30.254 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:31 compute-0 ceph-mon[75237]: pgmap v2069: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 5.9 KiB/s wr, 2 op/s
Nov 29 08:20:31 compute-0 nova_compute[255040]: 2025-11-29 08:20:31.362 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 15 op/s
Nov 29 08:20:33 compute-0 ceph-mon[75237]: pgmap v2070: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 15 op/s
Nov 29 08:20:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 29 08:20:34 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:34.528 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:35 compute-0 nova_compute[255040]: 2025-11-29 08:20:35.289 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:35 compute-0 ceph-mon[75237]: pgmap v2071: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 29 08:20:35 compute-0 nova_compute[255040]: 2025-11-29 08:20:35.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 29 08:20:36 compute-0 ceph-mon[75237]: pgmap v2072: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 29 08:20:36 compute-0 nova_compute[255040]: 2025-11-29 08:20:36.406 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.054 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.055 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.072 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 13 KiB/s wr, 17 op/s
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.149 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.149 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.154 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.155 255071 INFO nova.compute.claims [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.252 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315208610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.704 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.711 255071 DEBUG nova.compute.provider_tree [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.742 255071 DEBUG nova.scheduler.client.report [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.766 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.766 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.806 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.807 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.824 255071 INFO nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.843 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.883 255071 INFO nova.virt.block_device [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Booting with volume 9d326a13-082a-48ff-a152-c8f6b3c1a7e9 at /dev/vda
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:20:38
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 29 08:20:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.987 255071 DEBUG os_brick.utils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.988 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:38 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.998 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:38.999 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f70f64-480b-447a-bbbd-5b8cd0f990cc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.001 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.008 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.008 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[89b2a4b5-1773-4d1b-93a8-1acdd28c5852]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.010 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.017 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.017 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[109d424f-6efc-474a-b927-2246f89c6899]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.019 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[7e62acae-bf30-42f9-952d-501d22773adb]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.019 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.050 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.053 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.053 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.053 255071 DEBUG os_brick.initiator.connectors.lightos [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.054 255071 DEBUG os_brick.utils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
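(Editor's note) The os_brick trace above builds the connector properties from a handful of host probes: `multipathd show status`, the iSCSI initiator name, the root filesystem source via `findmnt`, and the nvme tooling. A rough stand-alone approximation of those probes with the standard library only; os_brick itself runs them through privsep/rootwrap, so this is an illustration of the data gathered, not its code path:

```python
# Rough approximation of the host probes visible in the trace above.
import subprocess

def run(cmd):
    res = subprocess.run(cmd, capture_output=True, text=True)
    return res.returncode, res.stdout.strip()

def connector_snapshot(my_ip, host):
    props = {"platform": "x86_64", "os_type": "linux", "ip": my_ip, "host": host}
    rc, _ = run(["multipathd", "show", "status"])
    props["multipath"] = rc == 0
    rc, out = run(["cat", "/etc/iscsi/initiatorname.iscsi"])
    if rc == 0 and "InitiatorName=" in out:
        props["initiator"] = out.split("InitiatorName=", 1)[1].splitlines()[0]
    rc, out = run(["findmnt", "-v", "/", "-n", "-o", "SOURCE"])
    if rc == 0:
        props["root_source"] = out          # "overlay" in the log above
    return props

print(connector_snapshot("192.168.122.100", "compute-0.ctlplane.example.com"))
```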
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.054 255071 DEBUG nova.virt.block_device [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updating existing volume attachment record: 6e5a2692-8339-46c5-ae72-e3f5a7d04602 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:20:39 compute-0 ceph-mon[75237]: pgmap v2073: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 13 KiB/s wr, 17 op/s
Nov 29 08:20:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2315208610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032156481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.669 255071 DEBUG nova.policy [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a08e1ef223b748efa4d5bdc804150f97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd25c6608beec4f818c6e402939192f16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.956 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.960 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.960 255071 INFO nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Creating image(s)
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.961 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.962 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Ensure instance console log exists: /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.963 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.963 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:39 compute-0 nova_compute[255040]: 2025-11-29 08:20:39.964 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:39 compute-0 podman[299211]: 2025-11-29 08:20:39.973560366 +0000 UTC m=+0.127929067 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:20:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 35 op/s
Nov 29 08:20:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:40 compute-0 nova_compute[255040]: 2025-11-29 08:20:40.292 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2032156481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.182 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Successfully created port: 26c5d3b7-4c28-4b38-9949-5d6291c59eae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.306 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404426.3053765, 71726c35-b087-417d-aa8f-40239b043464 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.307 255071 INFO nova.compute.manager [-] [instance: 71726c35-b087-417d-aa8f-40239b043464] VM Stopped (Lifecycle Event)
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.330 255071 DEBUG nova.compute.manager [None req-8ca04115-ee26-4372-8b6e-c5eab6a1d08c - - - - - -] [instance: 71726c35-b087-417d-aa8f-40239b043464] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:41 compute-0 ceph-mon[75237]: pgmap v2074: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 35 op/s
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.410 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.936 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Successfully updated port: 26c5d3b7-4c28-4b38-9949-5d6291c59eae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.955 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.956 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquired lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.956 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.997 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.998 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:20:41 compute-0 nova_compute[255040]: 2025-11-29 08:20:41.998 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.058 255071 DEBUG nova.compute.manager [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-changed-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.058 255071 DEBUG nova.compute.manager [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Refreshing instance network info cache due to event network-changed-26c5d3b7-4c28-4b38-9949-5d6291c59eae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.058 255071 DEBUG oslo_concurrency.lockutils [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 08:20:42 compute-0 ceph-mon[75237]: pgmap v2075: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 08:20:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305669901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.488 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.636 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.680 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.681 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4310MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.681 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.682 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.736 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Instance bef937b1-7990-4cb0-8126-746317b65e5f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.736 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.736 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:20:42 compute-0 nova_compute[255040]: 2025-11-29 08:20:42.766 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:20:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357268276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.182 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.191 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.211 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.231 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.231 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
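(Editor's note) The inventory dict reported to Placement above is what determines schedulable capacity; per resource class it works out to (total - reserved) × allocation_ratio. A quick check against the numbers in the log:

```python
# Capacity implied by the inventory logged above:
# capacity = (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity {capacity:g}")
# -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2
```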
Nov 29 08:20:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2305669901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1357268276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:20:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.460 255071 DEBUG nova.network.neutron [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updating instance_info_cache with network_info: [{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.477 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Releasing lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.477 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Instance network_info: |[{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
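(Editor's note) The network_info blob logged above is a list of VIF dicts. A short sketch that pulls the device name, MAC and fixed IPs out of that exact structure, trimmed to the keys actually used:

```python
# Walk the network_info structure as it appears in the log above and collect
# (device name, MAC, fixed IPs) per VIF.
import json

network_info = json.loads("""[{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae",
  "address": "fa:16:3e:27:e8:23", "devname": "tap26c5d3b7-4c",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}]""")

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"] if ip.get("type") == "fixed"]
    print(vif["devname"], vif["address"], ips)
# -> tap26c5d3b7-4c fa:16:3e:27:e8:23 ['10.100.0.11']
```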
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.478 255071 DEBUG oslo_concurrency.lockutils [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.478 255071 DEBUG nova.network.neutron [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Refreshing network info cache for port 26c5d3b7-4c28-4b38-9949-5d6291c59eae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.481 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Start _get_guest_xml network_info=[{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bef937b1-7990-4cb0-8126-746317b65e5f', 'attached_at': '', 'detached_at': '', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'serial': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '6e5a2692-8339-46c5-ae72-e3f5a7d04602', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.484 255071 WARNING nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.489 255071 DEBUG nova.virt.libvirt.host [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.489 255071 DEBUG nova.virt.libvirt.host [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.497 255071 DEBUG nova.virt.libvirt.host [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.498 255071 DEBUG nova.virt.libvirt.host [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.498 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.499 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.499 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.500 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.500 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.500 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.501 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.501 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.501 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.502 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.502 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.502 255071 DEBUG nova.virt.hardware [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
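(Editor's note) The nova.virt.hardware lines above go from unconstrained flavor/image limits (0:0:0, maxima 65536) to a single possible topology for one vCPU. A toy enumeration of sockets×cores×threads factorizations under the same limits shows why 1 vCPU can only yield (1,1,1); this mirrors the idea, not Nova's exact search:

```python
# Toy version of the topology search hinted at above: every (sockets, cores,
# threads) triple whose product equals the vCPU count and respects the limits.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))   # [(1, 1, 1)] -- matches the single topology logged
print(possible_topologies(4))   # (1, 1, 4), (1, 2, 2), (2, 2, 1), (4, 1, 1), ...
```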
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.530 255071 DEBUG nova.storage.rbd_utils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image bef937b1-7990-4cb0-8126-746317b65e5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.536 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:20:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364235546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:43 compute-0 nova_compute[255040]: 2025-11-29 08:20:43.997 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.109 255071 DEBUG os_brick.encryptors [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Using volume encryption metadata '{'encryption_key_id': '147484c8-c306-44d8-b0b0-cb61b3ef30c1', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bef937b1-7990-4cb0-8126-746317b65e5f', 'attached_at': '', 'detached_at': '', 'volume_id': '9d326a13-082a-48ff-a152-c8f6b3c1a7e9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.112 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:20:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.129 255071 DEBUG barbicanclient.v1.secrets [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.130 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.166 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.167 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
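(Editor's note) The barbicanclient calls above resolve the secret href for the volume's LUKS encryption key (key id 147484c8-c306-44d8-b0b0-cb61b3ef30c1 per the encryption metadata earlier). A minimal sketch of the same lookup with python-barbicanclient, assuming the usual v1 Client/secrets API and that `session` is an already-authenticated keystoneauth1 Session (construction not shown):

```python
# Minimal sketch of the Barbican lookup traced above; `session` is assumed to be
# an authenticated keystoneauth1 Session.
from barbicanclient import client as barbican_client

def fetch_volume_key(session, secret_ref):
    barbican = barbican_client.Client(session=session)
    secret = barbican.secrets.get(secret_ref)   # GET .../secrets/<uuid>, as logged
    return secret.payload                       # lazily fetches the payload bytes

# fetch_volume_key(session,
#     "https://barbican-internal.openstack.svc:9311/secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1")
```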
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.187 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.188 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.212 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.213 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.238 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.239 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.257 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.258 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.288 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.289 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.309 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.310 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.330 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.331 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.350 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.351 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2364235546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:20:44 compute-0 ceph-mon[75237]: pgmap v2076: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.375 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.376 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.393 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.394 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.423 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.424 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.455 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.456 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.478 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.479 255071 INFO barbicanclient.base [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/147484c8-c306-44d8-b0b0-cb61b3ef30c1
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.500 255071 DEBUG barbicanclient.client [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.501 255071 DEBUG nova.virt.libvirt.host [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <volume>9d326a13-082a-48ff-a152-c8f6b3c1a7e9</volume>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:20:44 compute-0 nova_compute[255040]: </secret>
Nov 29 08:20:44 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.529 255071 DEBUG nova.virt.libvirt.vif [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1414064021',display_name='tempest-TransferEncryptedVolumeTest-server-1414064021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1414064021',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-u7645mv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:38Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=bef937b1-7990-4cb0-8126-746317b65e5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
Nov 29 08:20:44 compute-0 nova_compute[255040]:  get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.530 255071 DEBUG nova.network.os_vif_util [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.530 255071 DEBUG nova.network.os_vif_util [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.532 255071 DEBUG nova.objects.instance [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid bef937b1-7990-4cb0-8126-746317b65e5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.543 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <uuid>bef937b1-7990-4cb0-8126-746317b65e5f</uuid>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <name>instance-0000001c</name>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1414064021</nova:name>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:20:43</nova:creationTime>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:user uuid="a08e1ef223b748efa4d5bdc804150f97">tempest-TransferEncryptedVolumeTest-1043863442-project-member</nova:user>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:project uuid="d25c6608beec4f818c6e402939192f16">tempest-TransferEncryptedVolumeTest-1043863442</nova:project>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <nova:port uuid="26c5d3b7-4c28-4b38-9949-5d6291c59eae">
Nov 29 08:20:44 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <system>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="serial">bef937b1-7990-4cb0-8126-746317b65e5f</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="uuid">bef937b1-7990-4cb0-8126-746317b65e5f</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </system>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <os>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </os>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <features>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </features>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/bef937b1-7990-4cb0-8126-746317b65e5f_disk.config">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </source>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-9d326a13-082a-48ff-a152-c8f6b3c1a7e9">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </source>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <serial>9d326a13-082a-48ff-a152-c8f6b3c1a7e9</serial>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:20:44 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="9d3ef7c2-9487-4aa0-9bb0-153afa1829e4"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:27:e8:23"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <target dev="tap26c5d3b7-4c"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/console.log" append="off"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <video>
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </video>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:20:44 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:20:44 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:20:44 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:20:44 compute-0 nova_compute[255040]: </domain>
Nov 29 08:20:44 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.545 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Preparing to wait for external event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.545 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.545 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.545 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.546 255071 DEBUG nova.virt.libvirt.vif [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1414064021',display_name='tempest-TransferEncryptedVolumeTest-server-1414064021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1414064021',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-u7645mv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:38Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=bef937b1-7990-4cb0-8126-746317b65e5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
Nov 29 08:20:44 compute-0 nova_compute[255040]:  /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.546 255071 DEBUG nova.network.os_vif_util [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.547 255071 DEBUG nova.network.os_vif_util [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.547 255071 DEBUG os_vif [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.548 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.548 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.549 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.551 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.552 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c5d3b7-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.552 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26c5d3b7-4c, col_values=(('external_ids', {'iface-id': '26c5d3b7-4c28-4b38-9949-5d6291c59eae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:e8:23', 'vm-uuid': 'bef937b1-7990-4cb0-8126-746317b65e5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.590 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:44 compute-0 NetworkManager[49116]: <info>  [1764404444.5913] manager: (tap26c5d3b7-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.593 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.596 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.597 255071 INFO os_vif [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c')
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.632 255071 DEBUG nova.network.neutron [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updated VIF entry in instance network info cache for port 26c5d3b7-4c28-4b38-9949-5d6291c59eae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.633 255071 DEBUG nova.network.neutron [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updating instance_info_cache with network_info: [{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.649 255071 DEBUG oslo_concurrency.lockutils [req-a30855fa-f4f8-4eb5-bbf0-07f293fb6f3f req-246bc0c1-dc34-4835-b580-9de51971f70b cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.650 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.650 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.650 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No VIF found with MAC fa:16:3e:27:e8:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.651 255071 INFO nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Using config drive
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.671 255071 DEBUG nova.storage.rbd_utils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image bef937b1-7990-4cb0-8126-746317b65e5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.921 255071 INFO nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Creating config drive at /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config
Nov 29 08:20:44 compute-0 nova_compute[255040]: 2025-11-29 08:20:44.929 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zdjfzjp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.059 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zdjfzjp" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.091 255071 DEBUG nova.storage.rbd_utils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image bef937b1-7990-4cb0-8126-746317b65e5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.095 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config bef937b1-7990-4cb0-8126-746317b65e5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:20:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.232 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.233 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.234 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.252 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.252 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.253 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:45 compute-0 sudo[299371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.295 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 sudo[299371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:45 compute-0 sudo[299371]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.308 255071 DEBUG oslo_concurrency.processutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config bef937b1-7990-4cb0-8126-746317b65e5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.309 255071 INFO nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Deleting local config drive /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f/disk.config because it was imported into RBD.
Nov 29 08:20:45 compute-0 sudo[299407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:20:45 compute-0 kernel: tap26c5d3b7-4c: entered promiscuous mode
Nov 29 08:20:45 compute-0 sudo[299407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.3610] manager: (tap26c5d3b7-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 08:20:45 compute-0 ovn_controller[153295]: 2025-11-29T08:20:45Z|00271|binding|INFO|Claiming lport 26c5d3b7-4c28-4b38-9949-5d6291c59eae for this chassis.
Nov 29 08:20:45 compute-0 ovn_controller[153295]: 2025-11-29T08:20:45Z|00272|binding|INFO|26c5d3b7-4c28-4b38-9949-5d6291c59eae: Claiming fa:16:3e:27:e8:23 10.100.0.11
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.361 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 sudo[299407]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.368 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:e8:23 10.100.0.11'], port_security=['fa:16:3e:27:e8:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bef937b1-7990-4cb0-8126-746317b65e5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2022bec6-39c6-4719-b618-05c5c5bc6af6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=26c5d3b7-4c28-4b38-9949-5d6291c59eae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.369 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 26c5d3b7-4c28-4b38-9949-5d6291c59eae in datapath a234aa60-c8c5-4137-96cd-77f576498813 bound to our chassis
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.370 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:20:45 compute-0 ovn_controller[153295]: 2025-11-29T08:20:45Z|00273|binding|INFO|Setting lport 26c5d3b7-4c28-4b38-9949-5d6291c59eae ovn-installed in OVS
Nov 29 08:20:45 compute-0 ovn_controller[153295]: 2025-11-29T08:20:45Z|00274|binding|INFO|Setting lport 26c5d3b7-4c28-4b38-9949-5d6291c59eae up in Southbound
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.384 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.387 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.386 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[59c555ca-c395-42f3-9ef0-e68c3275abe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.389 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa234aa60-c1 in ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.394 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa234aa60-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.394 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[23f5d580-8eaa-4fd7-a683-58bd3a964d30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.395 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdc3598-ad98-47ae-8b0f-0465fc7a7cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 systemd-udevd[299451]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:20:45 compute-0 systemd-machined[216271]: New machine qemu-28-instance-0000001c.
Nov 29 08:20:45 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.411 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf88095-6c2b-48a7-8508-cd0f937c3957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.4145] device (tap26c5d3b7-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.4157] device (tap26c5d3b7-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:20:45 compute-0 sudo[299443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:45 compute-0 sudo[299443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.439 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[48106b19-0386-4f8a-9df3-3dcee3ce3829]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 sudo[299443]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.469 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[85d402d3-730a-40df-94ec-f4a268663e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.4760] manager: (tapa234aa60-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.474 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[08da92b2-b3c0-4368-a67a-93044ac0e16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 sudo[299477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:20:45 compute-0 sudo[299477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.512 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[9304fdbb-04f7-4388-a905-11fa5f5be242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.516 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[12f2110d-872b-4f10-a191-1da8c9347296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.5368] device (tapa234aa60-c0): carrier: link connected
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.544 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e9f2c1-1fb7-4059-8699-408f678f5f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.569 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba31050-e393-4694-bc84-cfef779a6908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668287, 'reachable_time': 26497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299527, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.585 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4ad6c7-0968-4db7-97ae-74303bee88aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9b6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668287, 'tstamp': 668287}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299528, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.605 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[896e40bd-33f5-46fb-b293-cc69d8195c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668287, 'reachable_time': 26497, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299529, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.643 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5efad294-3535-4256-8ab3-5684f010a0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.716 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[712265c5-6cc3-4196-ad4a-5481390d46a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.718 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.718 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.719 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa234aa60-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.720 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 NetworkManager[49116]: <info>  [1764404445.7210] manager: (tapa234aa60-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 08:20:45 compute-0 kernel: tapa234aa60-c0: entered promiscuous mode
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.728 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa234aa60-c0, col_values=(('external_ids', {'iface-id': '821a8872-735e-4a04-8244-d3a33097614d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:20:45 compute-0 ovn_controller[153295]: 2025-11-29T08:20:45Z|00275|binding|INFO|Releasing lport 821a8872-735e-4a04-8244-d3a33097614d from this chassis (sb_readonly=0)
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.729 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.732 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.733 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3b1c0b-0095-4427-bb28-3b32de011979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.733 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:20:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:20:45.735 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'env', 'PROCESS_TAG=haproxy-a234aa60-c8c5-4137-96cd-77f576498813', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a234aa60-c8c5-4137-96cd-77f576498813.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.741 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.794 255071 DEBUG nova.compute.manager [req-1e74eebd-66fa-486c-ab8d-f77523e90686 req-b10f186c-dcb8-4ba5-9fce-d867cc6a4c17 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.794 255071 DEBUG oslo_concurrency.lockutils [req-1e74eebd-66fa-486c-ab8d-f77523e90686 req-b10f186c-dcb8-4ba5-9fce-d867cc6a4c17 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.794 255071 DEBUG oslo_concurrency.lockutils [req-1e74eebd-66fa-486c-ab8d-f77523e90686 req-b10f186c-dcb8-4ba5-9fce-d867cc6a4c17 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.795 255071 DEBUG oslo_concurrency.lockutils [req-1e74eebd-66fa-486c-ab8d-f77523e90686 req-b10f186c-dcb8-4ba5-9fce-d867cc6a4c17 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.795 255071 DEBUG nova.compute.manager [req-1e74eebd-66fa-486c-ab8d-f77523e90686 req-b10f186c-dcb8-4ba5-9fce-d867cc6a4c17 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Processing event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:45 compute-0 nova_compute[255040]: 2025-11-29 08:20:45.977 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:20:45 compute-0 sudo[299477]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:46 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c8b348f2-e149-41cd-8d7c-3dab265f4cdd does not exist
Nov 29 08:20:46 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev aca4cde3-7e93-40de-9701-bc24f883267d does not exist
Nov 29 08:20:46 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 0af8cf6c-7faa-447c-8ed6-691a717f538b does not exist
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:20:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:20:46 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:20:46 compute-0 podman[299626]: 2025-11-29 08:20:46.105169478 +0000 UTC m=+0.058124077 container create f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:20:46 compute-0 sudo[299632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 08:20:46 compute-0 sudo[299632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:46 compute-0 sudo[299632]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:46 compute-0 systemd[1]: Started libpod-conmon-f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3.scope.
Nov 29 08:20:46 compute-0 podman[299626]: 2025-11-29 08:20:46.077283412 +0000 UTC m=+0.030238031 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:20:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:46 compute-0 sudo[299665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:20:46 compute-0 sudo[299665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:46 compute-0 sudo[299665]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea90da5ae7c9b4cae2e8b57cfc244d67a4c27b8dd11e83311ff83c4b06849f36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:46 compute-0 podman[299626]: 2025-11-29 08:20:46.208055045 +0000 UTC m=+0.161009654 container init f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:20:46 compute-0 podman[299626]: 2025-11-29 08:20:46.214587859 +0000 UTC m=+0.167542468 container start f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:20:46 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [NOTICE]   (299707) : New worker (299719) forked
Nov 29 08:20:46 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [NOTICE]   (299707) : Loading success.
Nov 29 08:20:46 compute-0 sudo[299693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:46 compute-0 sudo[299693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:46 compute-0 sudo[299693]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:46 compute-0 sudo[299730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:20:46 compute-0 sudo[299730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.714201701 +0000 UTC m=+0.042628062 container create 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 08:20:46 compute-0 systemd[1]: Started libpod-conmon-39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c.scope.
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.692206413 +0000 UTC m=+0.020632794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.818130725 +0000 UTC m=+0.146557096 container init 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.828546134 +0000 UTC m=+0.156972485 container start 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.832679275 +0000 UTC m=+0.161105656 container attach 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:20:46 compute-0 confident_morse[299813]: 167 167
Nov 29 08:20:46 compute-0 systemd[1]: libpod-39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c.scope: Deactivated successfully.
Nov 29 08:20:46 compute-0 conmon[299813]: conmon 39ffa90e5ea36f8c8ede <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c.scope/container/memory.events
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.836167598 +0000 UTC m=+0.164593969 container died 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0bba651f7e0c08e636f6474fc03f817d57b23e3c80c9d1e7b54edbf36652d1e-merged.mount: Deactivated successfully.
Nov 29 08:20:46 compute-0 podman[299796]: 2025-11-29 08:20:46.888894081 +0000 UTC m=+0.217320472 container remove 39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:20:46 compute-0 systemd[1]: libpod-conmon-39ffa90e5ea36f8c8edeb9715b2294d903ca2dc12ebdcdeb5d72ca90ada0e29c.scope: Deactivated successfully.
Nov 29 08:20:47 compute-0 podman[299837]: 2025-11-29 08:20:47.056541721 +0000 UTC m=+0.042679784 container create 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:20:47 compute-0 systemd[1]: Started libpod-conmon-882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c.scope.
Nov 29 08:20:47 compute-0 ceph-mon[75237]: pgmap v2077: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 08:20:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:47 compute-0 podman[299837]: 2025-11-29 08:20:47.129077025 +0000 UTC m=+0.115215088 container init 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:20:47 compute-0 podman[299837]: 2025-11-29 08:20:47.040129322 +0000 UTC m=+0.026267405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:47 compute-0 podman[299837]: 2025-11-29 08:20:47.138150207 +0000 UTC m=+0.124288270 container start 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:20:47 compute-0 podman[299837]: 2025-11-29 08:20:47.141415744 +0000 UTC m=+0.127553837 container attach 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.863 255071 DEBUG nova.compute.manager [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.865 255071 DEBUG oslo_concurrency.lockutils [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.866 255071 DEBUG oslo_concurrency.lockutils [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.866 255071 DEBUG oslo_concurrency.lockutils [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.867 255071 DEBUG nova.compute.manager [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] No waiting events found dispatching network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:20:47 compute-0 nova_compute[255040]: 2025-11-29 08:20:47.867 255071 WARNING nova.compute.manager [req-be863186-b354-49bf-8ab1-507cb197e4a5 req-d836e7f8-6aa4-4e51-8e43-8e71c15506c0 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received unexpected event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae for instance with vm_state building and task_state spawning.
Nov 29 08:20:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 255 B/s wr, 66 op/s
Nov 29 08:20:48 compute-0 quirky_wright[299854]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:20:48 compute-0 quirky_wright[299854]: --> relative data size: 1.0
Nov 29 08:20:48 compute-0 quirky_wright[299854]: --> All data devices are unavailable
Nov 29 08:20:48 compute-0 systemd[1]: libpod-882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c.scope: Deactivated successfully.
Nov 29 08:20:48 compute-0 systemd[1]: libpod-882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c.scope: Consumed 1.039s CPU time.
Nov 29 08:20:48 compute-0 conmon[299854]: conmon 882c027937536b778d6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c.scope/container/memory.events
Nov 29 08:20:48 compute-0 podman[299837]: 2025-11-29 08:20:48.233897547 +0000 UTC m=+1.220035610 container died 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 08:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8600efd29911c7b70ad26ea756f4a62e684515bf814593710f62486a9dadbbcc-merged.mount: Deactivated successfully.
Nov 29 08:20:48 compute-0 podman[299837]: 2025-11-29 08:20:48.308050543 +0000 UTC m=+1.294188636 container remove 882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:20:48 compute-0 systemd[1]: libpod-conmon-882c027937536b778d6bbb46bb3db11672891284dee1e545b84438d229e03a1c.scope: Deactivated successfully.
Nov 29 08:20:48 compute-0 sudo[299730]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:48 compute-0 sudo[299901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:48 compute-0 sudo[299901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.406 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:20:48 compute-0 sudo[299901]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.408 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404448.4055128, bef937b1-7990-4cb0-8126-746317b65e5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.408 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] VM Started (Lifecycle Event)
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.413 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.417 255071 INFO nova.virt.libvirt.driver [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Instance spawned successfully.
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.417 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.430 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.437 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.441 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.442 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.442 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.442 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.443 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.443 255071 DEBUG nova.virt.libvirt.driver [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.467 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.467 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404448.4072342, bef937b1-7990-4cb0-8126-746317b65e5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.468 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] VM Paused (Lifecycle Event)
Nov 29 08:20:48 compute-0 sudo[299927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:20:48 compute-0 sudo[299927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:48 compute-0 sudo[299927]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.494 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.498 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404448.4115906, bef937b1-7990-4cb0-8126-746317b65e5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.498 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] VM Resumed (Lifecycle Event)
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.507 255071 INFO nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Took 8.55 seconds to spawn the instance on the hypervisor.
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.507 255071 DEBUG nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:48 compute-0 sudo[299952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:48 compute-0 sudo[299952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:48 compute-0 sudo[299952]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.543 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.546 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.571 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.584 255071 INFO nova.compute.manager [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Took 10.46 seconds to build instance.
Nov 29 08:20:48 compute-0 sudo[299977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:20:48 compute-0 sudo[299977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:48 compute-0 nova_compute[255040]: 2025-11-29 08:20:48.598 255071 DEBUG oslo_concurrency.lockutils [None req-0c28a0ef-9557-4eb2-8a63-d76e585df566 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:20:48 compute-0 podman[300043]: 2025-11-29 08:20:48.933123665 +0000 UTC m=+0.040476475 container create 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:20:48 compute-0 systemd[1]: Started libpod-conmon-352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64.scope.
Nov 29 08:20:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:49 compute-0 podman[300043]: 2025-11-29 08:20:48.916178691 +0000 UTC m=+0.023531501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:49 compute-0 podman[300043]: 2025-11-29 08:20:49.018999175 +0000 UTC m=+0.126352005 container init 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:20:49 compute-0 podman[300043]: 2025-11-29 08:20:49.024925184 +0000 UTC m=+0.132277994 container start 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:20:49 compute-0 podman[300043]: 2025-11-29 08:20:49.027884003 +0000 UTC m=+0.135236833 container attach 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:20:49 compute-0 musing_montalcini[300060]: 167 167
Nov 29 08:20:49 compute-0 systemd[1]: libpod-352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64.scope: Deactivated successfully.
Nov 29 08:20:49 compute-0 conmon[300060]: conmon 352a7badbf8772a346d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64.scope/container/memory.events
Nov 29 08:20:49 compute-0 podman[300065]: 2025-11-29 08:20:49.071755318 +0000 UTC m=+0.027354254 container died 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 08:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee9275b05b097e896d266f923086b43b5c78d78cff3ce7f2aeb4693e06ae920-merged.mount: Deactivated successfully.
Nov 29 08:20:49 compute-0 podman[300065]: 2025-11-29 08:20:49.153458666 +0000 UTC m=+0.109057602 container remove 352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:20:49 compute-0 systemd[1]: libpod-conmon-352a7badbf8772a346d25bbb73eadc31a76a6f9e0641ed3c2895ad20b6c49f64.scope: Deactivated successfully.
Nov 29 08:20:49 compute-0 ceph-mon[75237]: pgmap v2078: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 255 B/s wr, 66 op/s
Nov 29 08:20:49 compute-0 podman[300088]: 2025-11-29 08:20:49.320279755 +0000 UTC m=+0.041302978 container create 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:20:49 compute-0 systemd[1]: Started libpod-conmon-83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455.scope.
Nov 29 08:20:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8286ecc5d4fcea651ee30757d62d71bb67267f5a9d34c7177ee1e39f31b4061/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8286ecc5d4fcea651ee30757d62d71bb67267f5a9d34c7177ee1e39f31b4061/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8286ecc5d4fcea651ee30757d62d71bb67267f5a9d34c7177ee1e39f31b4061/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8286ecc5d4fcea651ee30757d62d71bb67267f5a9d34c7177ee1e39f31b4061/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:49 compute-0 podman[300088]: 2025-11-29 08:20:49.300574707 +0000 UTC m=+0.021597950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:49 compute-0 podman[300088]: 2025-11-29 08:20:49.4186484 +0000 UTC m=+0.139671633 container init 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:20:49 compute-0 podman[300088]: 2025-11-29 08:20:49.426817939 +0000 UTC m=+0.147841162 container start 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:20:49 compute-0 podman[300088]: 2025-11-29 08:20:49.430595969 +0000 UTC m=+0.151619212 container attach 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:20:49 compute-0 nova_compute[255040]: 2025-11-29 08:20:49.590 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:20:50 compute-0 tender_albattani[300105]: {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     "0": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "devices": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "/dev/loop3"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             ],
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_name": "ceph_lv0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_size": "21470642176",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "name": "ceph_lv0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "tags": {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_name": "ceph",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.crush_device_class": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.encrypted": "0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_id": "0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.vdo": "0"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             },
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "vg_name": "ceph_vg0"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         }
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     ],
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     "1": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "devices": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "/dev/loop4"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             ],
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_name": "ceph_lv1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_size": "21470642176",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "name": "ceph_lv1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "tags": {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_name": "ceph",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.crush_device_class": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.encrypted": "0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_id": "1",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.vdo": "0"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             },
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "vg_name": "ceph_vg1"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         }
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     ],
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     "2": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "devices": [
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "/dev/loop5"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             ],
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_name": "ceph_lv2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_size": "21470642176",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "name": "ceph_lv2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "tags": {
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.cluster_name": "ceph",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.crush_device_class": "",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.encrypted": "0",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osd_id": "2",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:                 "ceph.vdo": "0"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             },
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "type": "block",
Nov 29 08:20:50 compute-0 tender_albattani[300105]:             "vg_name": "ceph_vg2"
Nov 29 08:20:50 compute-0 tender_albattani[300105]:         }
Nov 29 08:20:50 compute-0 tender_albattani[300105]:     ]
Nov 29 08:20:50 compute-0 tender_albattani[300105]: }
Nov 29 08:20:50 compute-0 systemd[1]: libpod-83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455.scope: Deactivated successfully.
Nov 29 08:20:50 compute-0 conmon[300105]: conmon 83e2d75eda483eef8726 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455.scope/container/memory.events
Nov 29 08:20:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:50 compute-0 podman[300088]: 2025-11-29 08:20:50.216283374 +0000 UTC m=+0.937306597 container died 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:20:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8286ecc5d4fcea651ee30757d62d71bb67267f5a9d34c7177ee1e39f31b4061-merged.mount: Deactivated successfully.
Nov 29 08:20:50 compute-0 podman[300088]: 2025-11-29 08:20:50.270894796 +0000 UTC m=+0.991918019 container remove 83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_albattani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 08:20:50 compute-0 systemd[1]: libpod-conmon-83e2d75eda483eef87267bad373729c52add50451d6a22c292874360be661455.scope: Deactivated successfully.
Nov 29 08:20:50 compute-0 sudo[299977]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:50 compute-0 nova_compute[255040]: 2025-11-29 08:20:50.331 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:50 compute-0 sudo[300126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:50 compute-0 sudo[300126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:50 compute-0 sudo[300126]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:50 compute-0 sudo[300151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:20:50 compute-0 sudo[300151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:50 compute-0 sudo[300151]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:50 compute-0 sudo[300176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:50 compute-0 sudo[300176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:50 compute-0 sudo[300176]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:50 compute-0 sudo[300201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:20:50 compute-0 sudo[300201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:50 compute-0 podman[300267]: 2025-11-29 08:20:50.932564279 +0000 UTC m=+0.040382932 container create 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:20:50 compute-0 systemd[1]: Started libpod-conmon-37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15.scope.
Nov 29 08:20:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:50.916846139 +0000 UTC m=+0.024664812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:51.02068942 +0000 UTC m=+0.128508093 container init 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:51.030469442 +0000 UTC m=+0.138288095 container start 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:51.03337407 +0000 UTC m=+0.141192723 container attach 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:20:51 compute-0 brave_elion[300285]: 167 167
Nov 29 08:20:51 compute-0 systemd[1]: libpod-37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15.scope: Deactivated successfully.
Nov 29 08:20:51 compute-0 conmon[300285]: conmon 37aeb94443cb60e90363 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15.scope/container/memory.events
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:51.036045501 +0000 UTC m=+0.143864154 container died 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8fa063fe89db65606bce4d7b874ec35f3b9c1c78a65a204f570e1a17050fc33-merged.mount: Deactivated successfully.
Nov 29 08:20:51 compute-0 podman[300267]: 2025-11-29 08:20:51.088858536 +0000 UTC m=+0.196677189 container remove 37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:20:51 compute-0 systemd[1]: libpod-conmon-37aeb94443cb60e90363ae2e0343bfb2d4ebc3ff2dd363d418feb117b2717a15.scope: Deactivated successfully.
Nov 29 08:20:51 compute-0 ceph-mon[75237]: pgmap v2079: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:20:51 compute-0 podman[300310]: 2025-11-29 08:20:51.270443829 +0000 UTC m=+0.044321057 container create a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:20:51 compute-0 systemd[1]: Started libpod-conmon-a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3.scope.
Nov 29 08:20:51 compute-0 podman[300310]: 2025-11-29 08:20:51.249752606 +0000 UTC m=+0.023629864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:20:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f7db453ec037cdd7e44558842176b7d9cbf00981a8b6f18f16abea8a82df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f7db453ec037cdd7e44558842176b7d9cbf00981a8b6f18f16abea8a82df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f7db453ec037cdd7e44558842176b7d9cbf00981a8b6f18f16abea8a82df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2f7db453ec037cdd7e44558842176b7d9cbf00981a8b6f18f16abea8a82df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:20:51 compute-0 podman[300310]: 2025-11-29 08:20:51.374697542 +0000 UTC m=+0.148574770 container init a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:20:51 compute-0 podman[300310]: 2025-11-29 08:20:51.387923646 +0000 UTC m=+0.161800894 container start a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 08:20:51 compute-0 podman[300310]: 2025-11-29 08:20:51.393571647 +0000 UTC m=+0.167448885 container attach a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:20:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 99 op/s
Nov 29 08:20:52 compute-0 loving_margulis[300327]: {
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_id": 2,
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "type": "bluestore"
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     },
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_id": 0,
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "type": "bluestore"
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     },
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_id": 1,
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:20:52 compute-0 loving_margulis[300327]:         "type": "bluestore"
Nov 29 08:20:52 compute-0 loving_margulis[300327]:     }
Nov 29 08:20:52 compute-0 loving_margulis[300327]: }
Nov 29 08:20:52 compute-0 systemd[1]: libpod-a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3.scope: Deactivated successfully.
Nov 29 08:20:52 compute-0 systemd[1]: libpod-a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3.scope: Consumed 1.019s CPU time.
Nov 29 08:20:52 compute-0 podman[300360]: 2025-11-29 08:20:52.440215602 +0000 UTC m=+0.026805289 container died a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 08:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb2f7db453ec037cdd7e44558842176b7d9cbf00981a8b6f18f16abea8a82df3-merged.mount: Deactivated successfully.
Nov 29 08:20:52 compute-0 podman[300360]: 2025-11-29 08:20:52.5055196 +0000 UTC m=+0.092109277 container remove a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:20:52 compute-0 systemd[1]: libpod-conmon-a72c2450b763fba9280736103bca38cdd1ce8a37982356e9e29f2253cdc6ddb3.scope: Deactivated successfully.
Nov 29 08:20:52 compute-0 sudo[300201]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:20:52 compute-0 podman[300361]: 2025-11-29 08:20:52.55589896 +0000 UTC m=+0.115249538 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:20:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:20:52 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 15eca69f-c74c-4623-96ea-5c931a907923 does not exist
Nov 29 08:20:52 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 2fc53202-4a61-4b12-8f6d-354ee008fd35 does not exist
Nov 29 08:20:52 compute-0 sudo[300394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:20:52 compute-0 sudo[300394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:52 compute-0 sudo[300394]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:52 compute-0 sudo[300419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:20:52 compute-0 sudo[300419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:20:52 compute-0 sudo[300419]: pam_unix(sudo:session): session closed for user root
Nov 29 08:20:53 compute-0 ceph-mon[75237]: pgmap v2080: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 99 op/s
Nov 29 08:20:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:53 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.213436) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453213607, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2188, "num_deletes": 258, "total_data_size": 3423831, "memory_usage": 3494336, "flush_reason": "Manual Compaction"}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453249916, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3364091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37367, "largest_seqno": 39554, "table_properties": {"data_size": 3353998, "index_size": 6459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21026, "raw_average_key_size": 20, "raw_value_size": 3333756, "raw_average_value_size": 3297, "num_data_blocks": 283, "num_entries": 1011, "num_filter_entries": 1011, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404248, "oldest_key_time": 1764404248, "file_creation_time": 1764404453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 36726 microseconds, and 15550 cpu microseconds.
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.250174) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3364091 bytes OK
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.250259) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.252192) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.252214) EVENT_LOG_v1 {"time_micros": 1764404453252207, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.252234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3414509, prev total WAL file size 3414509, number of live WAL files 2.
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.254203) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3285KB)], [77(10MB)]
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453254421, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14652063, "oldest_snapshot_seqno": -1}
Nov 29 08:20:53 compute-0 nova_compute[255040]: 2025-11-29 08:20:53.366 255071 DEBUG nova.compute.manager [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-changed-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:20:53 compute-0 nova_compute[255040]: 2025-11-29 08:20:53.367 255071 DEBUG nova.compute.manager [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Refreshing instance network info cache due to event network-changed-26c5d3b7-4c28-4b38-9949-5d6291c59eae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:20:53 compute-0 nova_compute[255040]: 2025-11-29 08:20:53.367 255071 DEBUG oslo_concurrency.lockutils [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:20:53 compute-0 nova_compute[255040]: 2025-11-29 08:20:53.367 255071 DEBUG oslo_concurrency.lockutils [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:20:53 compute-0 nova_compute[255040]: 2025-11-29 08:20:53.368 255071 DEBUG nova.network.neutron [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Refreshing network info cache for port 26c5d3b7-4c28-4b38-9949-5d6291c59eae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7445 keys, 12905803 bytes, temperature: kUnknown
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453380436, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12905803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12848026, "index_size": 38081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 188260, "raw_average_key_size": 25, "raw_value_size": 12706519, "raw_average_value_size": 1706, "num_data_blocks": 1517, "num_entries": 7445, "num_filter_entries": 7445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.380779) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12905803 bytes
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.382126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.1 rd, 102.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 7974, records dropped: 529 output_compression: NoCompression
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.382147) EVENT_LOG_v1 {"time_micros": 1764404453382137, "job": 44, "event": "compaction_finished", "compaction_time_micros": 126167, "compaction_time_cpu_micros": 43142, "output_level": 6, "num_output_files": 1, "total_output_size": 12905803, "num_input_records": 7974, "num_output_records": 7445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453382800, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404453384802, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.253967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.384867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.384873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.384875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.384877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:53 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:20:53.384879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:20:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 61 op/s
Nov 29 08:20:54 compute-0 nova_compute[255040]: 2025-11-29 08:20:54.594 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:54 compute-0 nova_compute[255040]: 2025-11-29 08:20:54.934 255071 DEBUG nova.network.neutron [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updated VIF entry in instance network info cache for port 26c5d3b7-4c28-4b38-9949-5d6291c59eae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:20:54 compute-0 nova_compute[255040]: 2025-11-29 08:20:54.935 255071 DEBUG nova.network.neutron [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updating instance_info_cache with network_info: [{"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:20:54 compute-0 nova_compute[255040]: 2025-11-29 08:20:54.954 255071 DEBUG oslo_concurrency.lockutils [req-8842c786-e025-49eb-aaa3-c25afb9687f3 req-32e7d9b7-9a63-40d2-b724-632e8fd65a61 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-bef937b1-7990-4cb0-8126-746317b65e5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:20:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:20:55 compute-0 ceph-mon[75237]: pgmap v2081: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 61 op/s
Nov 29 08:20:55 compute-0 nova_compute[255040]: 2025-11-29 08:20:55.336 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0054373870629029104 of space, bias 1.0, pg target 1.6312161188708731 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:20:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 29 08:20:57 compute-0 ceph-mon[75237]: pgmap v2082: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:20:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:20:58 compute-0 podman[300444]: 2025-11-29 08:20:58.89531323 +0000 UTC m=+0.060830630 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 29 08:20:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:20:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928419242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:20:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:20:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/928419242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:59 compute-0 ceph-mon[75237]: pgmap v2083: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 08:20:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/928419242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:20:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/928419242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:20:59 compute-0 nova_compute[255040]: 2025-11-29 08:20:59.598 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Nov 29 08:21:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:00 compute-0 nova_compute[255040]: 2025-11-29 08:21:00.338 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:01 compute-0 ceph-mon[75237]: pgmap v2084: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Nov 29 08:21:01 compute-0 ovn_controller[153295]: 2025-11-29T08:21:01Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.11
Nov 29 08:21:01 compute-0 ovn_controller[153295]: 2025-11-29T08:21:01Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:27:e8:23 10.100.0.11
Nov 29 08:21:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.1 KiB/s wr, 73 op/s
Nov 29 08:21:03 compute-0 ceph-mon[75237]: pgmap v2085: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.1 KiB/s wr, 73 op/s
Nov 29 08:21:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 7.1 KiB/s wr, 30 op/s
Nov 29 08:21:04 compute-0 nova_compute[255040]: 2025-11-29 08:21:04.601 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:05 compute-0 ovn_controller[153295]: 2025-11-29T08:21:05Z|00068|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.11
Nov 29 08:21:05 compute-0 ovn_controller[153295]: 2025-11-29T08:21:05Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:27:e8:23 10.100.0.11
Nov 29 08:21:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:05 compute-0 ceph-mon[75237]: pgmap v2086: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 7.1 KiB/s wr, 30 op/s
Nov 29 08:21:05 compute-0 nova_compute[255040]: 2025-11-29 08:21:05.341 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 7.4 KiB/s wr, 57 op/s
Nov 29 08:21:06 compute-0 ovn_controller[153295]: 2025-11-29T08:21:06Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:e8:23 10.100.0.11
Nov 29 08:21:06 compute-0 ovn_controller[153295]: 2025-11-29T08:21:06Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:e8:23 10.100.0.11
Nov 29 08:21:07 compute-0 ceph-mon[75237]: pgmap v2087: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 7.4 KiB/s wr, 57 op/s
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 45 op/s
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:09 compute-0 ceph-mon[75237]: pgmap v2088: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.4 KiB/s wr, 45 op/s
Nov 29 08:21:09 compute-0 nova_compute[255040]: 2025-11-29 08:21:09.603 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 46 op/s
Nov 29 08:21:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:10 compute-0 nova_compute[255040]: 2025-11-29 08:21:10.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:10 compute-0 podman[300466]: 2025-11-29 08:21:10.956707849 +0000 UTC m=+0.110636944 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:21:11 compute-0 ceph-mon[75237]: pgmap v2089: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 46 op/s
Nov 29 08:21:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 46 op/s
Nov 29 08:21:13 compute-0 ceph-mon[75237]: pgmap v2090: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 46 op/s
Nov 29 08:21:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 08:21:14 compute-0 nova_compute[255040]: 2025-11-29 08:21:14.606 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:15 compute-0 ceph-mon[75237]: pgmap v2091: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 08:21:15 compute-0 nova_compute[255040]: 2025-11-29 08:21:15.378 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 29 08:21:16 compute-0 ceph-mon[75237]: pgmap v2092: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 17 KiB/s wr, 28 op/s
Nov 29 08:21:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 08:21:19 compute-0 ceph-mon[75237]: pgmap v2093: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 08:21:19 compute-0 nova_compute[255040]: 2025-11-29 08:21:19.607 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Nov 29 08:21:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.227785) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480227809, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 455, "num_deletes": 251, "total_data_size": 405672, "memory_usage": 414920, "flush_reason": "Manual Compaction"}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480232637, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 296677, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39555, "largest_seqno": 40009, "table_properties": {"data_size": 294248, "index_size": 529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6587, "raw_average_key_size": 20, "raw_value_size": 289314, "raw_average_value_size": 890, "num_data_blocks": 24, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404454, "oldest_key_time": 1764404454, "file_creation_time": 1764404480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 4882 microseconds, and 1894 cpu microseconds.
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.232664) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 296677 bytes OK
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.232682) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.234550) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.234562) EVENT_LOG_v1 {"time_micros": 1764404480234558, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.234572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 402932, prev total WAL file size 402932, number of live WAL files 2.
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.235059) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323538' seq:72057594037927935, type:22 .. '6D6772737461740031353130' seq:0, type:0; will stop at (end)
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(289KB)], [80(12MB)]
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480235218, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13202480, "oldest_snapshot_seqno": -1}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7266 keys, 9997948 bytes, temperature: kUnknown
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480357082, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9997948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9945897, "index_size": 32803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 184709, "raw_average_key_size": 25, "raw_value_size": 9812018, "raw_average_value_size": 1350, "num_data_blocks": 1297, "num_entries": 7266, "num_filter_entries": 7266, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.357355) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9997948 bytes
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.358954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.3 rd, 82.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.3 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(78.2) write-amplify(33.7) OK, records in: 7770, records dropped: 504 output_compression: NoCompression
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.358994) EVENT_LOG_v1 {"time_micros": 1764404480358976, "job": 46, "event": "compaction_finished", "compaction_time_micros": 121954, "compaction_time_cpu_micros": 54620, "output_level": 6, "num_output_files": 1, "total_output_size": 9997948, "num_input_records": 7770, "num_output_records": 7266, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480359352, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404480363970, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.234882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.364064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.364073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.364076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.364079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:21:20.364082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:21:20 compute-0 nova_compute[255040]: 2025-11-29 08:21:20.381 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:21 compute-0 ceph-mon[75237]: pgmap v2094: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Nov 29 08:21:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Nov 29 08:21:22 compute-0 nova_compute[255040]: 2025-11-29 08:21:22.444 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:22 compute-0 podman[300492]: 2025-11-29 08:21:22.918612462 +0000 UTC m=+0.075494393 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 08:21:23 compute-0 ovn_controller[153295]: 2025-11-29T08:21:23Z|00276|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 29 08:21:23 compute-0 ceph-mon[75237]: pgmap v2095: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Nov 29 08:21:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 29 08:21:24 compute-0 nova_compute[255040]: 2025-11-29 08:21:24.610 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:25 compute-0 ceph-mon[75237]: pgmap v2096: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.383 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.565 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.565 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.566 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.566 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.566 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.567 255071 INFO nova.compute.manager [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Terminating instance
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.568 255071 DEBUG nova.compute.manager [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:21:25 compute-0 kernel: tap26c5d3b7-4c (unregistering): left promiscuous mode
Nov 29 08:21:25 compute-0 NetworkManager[49116]: <info>  [1764404485.6270] device (tap26c5d3b7-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.637 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 ovn_controller[153295]: 2025-11-29T08:21:25Z|00277|binding|INFO|Releasing lport 26c5d3b7-4c28-4b38-9949-5d6291c59eae from this chassis (sb_readonly=0)
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.639 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 ovn_controller[153295]: 2025-11-29T08:21:25Z|00278|binding|INFO|Setting lport 26c5d3b7-4c28-4b38-9949-5d6291c59eae down in Southbound
Nov 29 08:21:25 compute-0 ovn_controller[153295]: 2025-11-29T08:21:25Z|00279|binding|INFO|Removing iface tap26c5d3b7-4c ovn-installed in OVS
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.664 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 29 08:21:25 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 16.763s CPU time.
Nov 29 08:21:25 compute-0 systemd-machined[216271]: Machine qemu-28-instance-0000001c terminated.
Nov 29 08:21:25 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:25.781 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:e8:23 10.100.0.11'], port_security=['fa:16:3e:27:e8:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bef937b1-7990-4cb0-8126-746317b65e5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2022bec6-39c6-4719-b618-05c5c5bc6af6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=26c5d3b7-4c28-4b38-9949-5d6291c59eae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:25 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:25.783 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 26c5d3b7-4c28-4b38-9949-5d6291c59eae in datapath a234aa60-c8c5-4137-96cd-77f576498813 unbound from our chassis
Nov 29 08:21:25 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:25.784 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a234aa60-c8c5-4137-96cd-77f576498813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:21:25 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:25.786 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[40f15d18-cf69-49c3-8552-16ab0b8dde9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:25 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:25.786 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace which is not needed anymore
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.811 255071 INFO nova.virt.libvirt.driver [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Instance destroyed successfully.
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.811 255071 DEBUG nova.objects.instance [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'resources' on Instance uuid bef937b1-7990-4cb0-8126-746317b65e5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.906 255071 DEBUG nova.virt.libvirt.vif [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1414064021',display_name='tempest-TransferEncryptedVolumeTest-server-1414064021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1414064021',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHZ/+4uVEasBc+0e7HUZBciJ5ezONfDyC9abvZvKTAfyTotAeMYwBOUphcmP9ofLztEKtRidvxMb+4vqS+q0JDF+wXrAnm0iCWndPLMz17r0Q90fDoTo8tKBi9U0NUAS+w==',key_name='tempest-TransferEncryptedVolumeTest-127275625',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-u7645mv2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:48Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=bef937b1-7990-4cb0-8126-746317b65e5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.907 255071 DEBUG nova.network.os_vif_util [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "address": "fa:16:3e:27:e8:23", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c5d3b7-4c", "ovs_interfaceid": "26c5d3b7-4c28-4b38-9949-5d6291c59eae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.908 255071 DEBUG nova.network.os_vif_util [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [NOTICE]   (299707) : haproxy version is 2.8.14-c23fe91
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.909 255071 DEBUG os_vif [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [NOTICE]   (299707) : path to executable is /usr/sbin/haproxy
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [WARNING]  (299707) : Exiting Master process...
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [WARNING]  (299707) : Exiting Master process...
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [ALERT]    (299707) : Current worker (299719) exited with code 143 (Terminated)
Nov 29 08:21:25 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[299679]: [WARNING]  (299707) : All workers exited. Exiting... (0)
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.912 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.912 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c5d3b7-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:25 compute-0 systemd[1]: libpod-f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3.scope: Deactivated successfully.
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.914 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.918 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:21:25 compute-0 podman[300548]: 2025-11-29 08:21:25.920614821 +0000 UTC m=+0.045987393 container died f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:21:25 compute-0 nova_compute[255040]: 2025-11-29 08:21:25.921 255071 INFO os_vif [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:e8:23,bridge_name='br-int',has_traffic_filtering=True,id=26c5d3b7-4c28-4b38-9949-5d6291c59eae,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c5d3b7-4c')
Nov 29 08:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3-userdata-shm.mount: Deactivated successfully.
Nov 29 08:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea90da5ae7c9b4cae2e8b57cfc244d67a4c27b8dd11e83311ff83c4b06849f36-merged.mount: Deactivated successfully.
Nov 29 08:21:25 compute-0 podman[300548]: 2025-11-29 08:21:25.962222326 +0000 UTC m=+0.087594898 container cleanup f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:21:25 compute-0 systemd[1]: libpod-conmon-f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3.scope: Deactivated successfully.
Nov 29 08:21:26 compute-0 podman[300593]: 2025-11-29 08:21:26.029296692 +0000 UTC m=+0.044327668 container remove f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.035 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1f688e38-4c0a-40cf-8714-905e8a74b237]: (4, ('Sat Nov 29 08:21:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3)\nf8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3\nSat Nov 29 08:21:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (f8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3)\nf8de2bdca36f35573664ba334d7449375ef8b700a96a35ee52150356573f28b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.037 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba0ad91-43f8-4171-abfc-67226ed86465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.038 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.041 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:26 compute-0 kernel: tapa234aa60-c0: left promiscuous mode
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.055 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.058 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9474c2a2-07e6-44c5-a0f9-8e891b149dfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.081 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc2f61a-cb74-47b2-bf32-4b894da8c4c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.082 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6358c4-9900-425a-845d-def21d4ad468]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.099 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e74f22d8-3140-45f0-be6b-7ec2c4f99369]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668280, 'reachable_time': 38133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300610, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 systemd[1]: run-netns-ovnmeta\x2da234aa60\x2dc8c5\x2d4137\x2d96cd\x2d77f576498813.mount: Deactivated successfully.
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.102 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:21:26 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:26.103 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfa7d31-72a9-4b5f-ad3f-254b332da5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.107 255071 INFO nova.virt.libvirt.driver [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Deleting instance files /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f_del
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.108 255071 INFO nova.virt.libvirt.driver [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Deletion of /var/lib/nova/instances/bef937b1-7990-4cb0-8126-746317b65e5f_del complete
Nov 29 08:21:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 14 KiB/s wr, 4 op/s
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.259 255071 INFO nova.compute.manager [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.260 255071 DEBUG oslo.service.loopingcall [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.261 255071 DEBUG nova.compute.manager [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.261 255071 DEBUG nova.network.neutron [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.776 255071 DEBUG nova.compute.manager [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-unplugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.777 255071 DEBUG oslo_concurrency.lockutils [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.777 255071 DEBUG oslo_concurrency.lockutils [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.777 255071 DEBUG oslo_concurrency.lockutils [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.777 255071 DEBUG nova.compute.manager [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] No waiting events found dispatching network-vif-unplugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:26 compute-0 nova_compute[255040]: 2025-11-29 08:21:26.778 255071 DEBUG nova.compute.manager [req-557d7d3c-9617-4f8b-9f47-fbeb690c7bfb req-316496f9-2b6c-453d-a137-df29118448e9 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-unplugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:21:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:27.147 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:27.148 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:27.148 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:27 compute-0 ceph-mon[75237]: pgmap v2097: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 14 KiB/s wr, 4 op/s
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.694 255071 DEBUG nova.network.neutron [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.725 255071 INFO nova.compute.manager [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Took 1.46 seconds to deallocate network for instance.
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.892 255071 INFO nova.compute.manager [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Took 0.17 seconds to detach 1 volumes for instance.
Nov 29 08:21:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:27.935 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.935 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:27.936 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.953 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:27 compute-0 nova_compute[255040]: 2025-11-29 08:21:27.953 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.011 255071 DEBUG oslo_concurrency.processutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 15 op/s
Nov 29 08:21:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049687699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:28 compute-0 ceph-mon[75237]: pgmap v2098: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 11 KiB/s wr, 15 op/s
Nov 29 08:21:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4049687699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.436 255071 DEBUG oslo_concurrency.processutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.445 255071 DEBUG nova.compute.provider_tree [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.462 255071 DEBUG nova.scheduler.client.report [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.480 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.505 255071 INFO nova.scheduler.client.report [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Deleted allocations for instance bef937b1-7990-4cb0-8126-746317b65e5f
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.572 255071 DEBUG oslo_concurrency.lockutils [None req-18af63bd-00cb-47f9-b847-2e4cbb17392a a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.864 255071 DEBUG nova.compute.manager [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.864 255071 DEBUG oslo_concurrency.lockutils [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.865 255071 DEBUG oslo_concurrency.lockutils [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.865 255071 DEBUG oslo_concurrency.lockutils [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "bef937b1-7990-4cb0-8126-746317b65e5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.865 255071 DEBUG nova.compute.manager [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] No waiting events found dispatching network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.865 255071 WARNING nova.compute.manager [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received unexpected event network-vif-plugged-26c5d3b7-4c28-4b38-9949-5d6291c59eae for instance with vm_state deleted and task_state None.
Nov 29 08:21:28 compute-0 nova_compute[255040]: 2025-11-29 08:21:28.866 255071 DEBUG nova.compute.manager [req-5d61abf5-2267-4e8d-93f8-32f9eb0631f2 req-76b5a0e6-a0f7-4462-8107-e7376bf748c2 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Received event network-vif-deleted-26c5d3b7-4c28-4b38-9949-5d6291c59eae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:21:29 compute-0 podman[300634]: 2025-11-29 08:21:29.895284222 +0000 UTC m=+0.060421930 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:21:29 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:21:29.938 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:21:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 11 KiB/s wr, 19 op/s
Nov 29 08:21:30 compute-0 nova_compute[255040]: 2025-11-29 08:21:30.384 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:30 compute-0 ceph-mon[75237]: pgmap v2099: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 219 KiB/s rd, 11 KiB/s wr, 19 op/s
Nov 29 08:21:30 compute-0 nova_compute[255040]: 2025-11-29 08:21:30.915 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 7.6 KiB/s wr, 19 op/s
Nov 29 08:21:32 compute-0 ceph-mon[75237]: pgmap v2100: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 7.6 KiB/s wr, 19 op/s
Nov 29 08:21:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:21:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2953052023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:33 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:21:33 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2953052023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2953052023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:33 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2953052023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:21:34 compute-0 ceph-mon[75237]: pgmap v2101: 305 pgs: 305 active+clean; 453 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:21:35 compute-0 nova_compute[255040]: 2025-11-29 08:21:35.387 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:35 compute-0 nova_compute[255040]: 2025-11-29 08:21:35.918 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 328 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 938 B/s wr, 25 op/s
Nov 29 08:21:37 compute-0 ceph-mon[75237]: pgmap v2102: 305 pgs: 305 active+clean; 328 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 938 B/s wr, 25 op/s
Nov 29 08:21:38 compute-0 nova_compute[255040]: 2025-11-29 08:21:38.009 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:21:38
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', 'vms', 'volumes', 'default.rgw.meta']
Nov 29 08:21:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:21:38 compute-0 nova_compute[255040]: 2025-11-29 08:21:38.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:38 compute-0 nova_compute[255040]: 2025-11-29 08:21:38.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:38 compute-0 nova_compute[255040]: 2025-11-29 08:21:38.977 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:21:39 compute-0 nova_compute[255040]: 2025-11-29 08:21:39.000 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:21:39 compute-0 ceph-mon[75237]: pgmap v2103: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 29 08:21:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 29 08:21:40 compute-0 nova_compute[255040]: 2025-11-29 08:21:40.389 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:40 compute-0 nova_compute[255040]: 2025-11-29 08:21:40.809 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404485.80874, bef937b1-7990-4cb0-8126-746317b65e5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:21:40 compute-0 nova_compute[255040]: 2025-11-29 08:21:40.810 255071 INFO nova.compute.manager [-] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] VM Stopped (Lifecycle Event)
Nov 29 08:21:40 compute-0 nova_compute[255040]: 2025-11-29 08:21:40.920 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:41 compute-0 nova_compute[255040]: 2025-11-29 08:21:41.123 255071 DEBUG nova.compute.manager [None req-1b818a88-5676-476e-9fb3-fd272570ed73 - - - - - -] [instance: bef937b1-7990-4cb0-8126-746317b65e5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:21:41 compute-0 ceph-mon[75237]: pgmap v2104: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 29 08:21:41 compute-0 podman[300654]: 2025-11-29 08:21:41.931775395 +0000 UTC m=+0.101147100 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:21:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 682 B/s wr, 24 op/s
Nov 29 08:21:43 compute-0 nova_compute[255040]: 2025-11-29 08:21:43.000 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:43 compute-0 ceph-mon[75237]: pgmap v2105: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 682 B/s wr, 24 op/s
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:21:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:21:43 compute-0 nova_compute[255040]: 2025-11-29 08:21:43.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:43 compute-0 nova_compute[255040]: 2025-11-29 08:21:43.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:21:43 compute-0 nova_compute[255040]: 2025-11-29 08:21:43.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 682 B/s wr, 24 op/s
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.149 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.149 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.149 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.150 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.150 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:44 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:44 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643024438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.565 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.734 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.736 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4328MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.736 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.737 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.869 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.870 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:21:44 compute-0 nova_compute[255040]: 2025-11-29 08:21:44.920 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.389 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:21:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1407135013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-0 ceph-mon[75237]: pgmap v2106: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 682 B/s wr, 24 op/s
Nov 29 08:21:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2643024438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.821 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.828 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.853 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.923 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.998 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:21:45 compute-0 nova_compute[255040]: 2025-11-29 08:21:45.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:21:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 25 op/s
Nov 29 08:21:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1407135013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:21:46 compute-0 ceph-mon[75237]: pgmap v2107: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 25 op/s
Nov 29 08:21:47 compute-0 nova_compute[255040]: 2025-11-29 08:21:47.999 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:47.999 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:48.000 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:48.000 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:48.017 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:48.018 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:48 compute-0 nova_compute[255040]: 2025-11-29 08:21:48.018 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 19 op/s
Nov 29 08:21:49 compute-0 ceph-mon[75237]: pgmap v2108: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 19 op/s
Nov 29 08:21:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 9 op/s
Nov 29 08:21:50 compute-0 nova_compute[255040]: 2025-11-29 08:21:50.441 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:50 compute-0 nova_compute[255040]: 2025-11-29 08:21:50.926 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:51 compute-0 ceph-mon[75237]: pgmap v2109: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 9 op/s
Nov 29 08:21:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:21:52 compute-0 sudo[300728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:52 compute-0 sudo[300728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:52 compute-0 sudo[300728]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:52 compute-0 sudo[300753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:52 compute-0 sudo[300753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:52 compute-0 sudo[300753]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:52 compute-0 sudo[300778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:52 compute-0 sudo[300778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:52 compute-0 sudo[300778]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:52 compute-0 sudo[300803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:21:52 compute-0 sudo[300803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 podman[300827]: 2025-11-29 08:21:53.038626331 +0000 UTC m=+0.051402178 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:21:53 compute-0 sudo[300803]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-0 ceph-mon[75237]: pgmap v2110: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:53 compute-0 sudo[300867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:53 compute-0 sudo[300867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 sudo[300867]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-0 sudo[300892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:53 compute-0 sudo[300892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 sudo[300892]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-0 sudo[300917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:53 compute-0 sudo[300917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 sudo[300917]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-0 sudo[300942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:21:53 compute-0 sudo[300942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 sudo[300942]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev b3c8a4f2-c3bb-4bc6-a20c-cfcf4cfece3a does not exist
Nov 29 08:21:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 6f7fa3b7-ebba-4293-a098-62bee7792ae3 does not exist
Nov 29 08:21:53 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e4db8ec6-c900-4d66-8a2f-768f9e220a73 does not exist
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:21:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:21:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:53 compute-0 sudo[300998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:53 compute-0 sudo[300998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:53 compute-0 sudo[300998]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:54 compute-0 sudo[301023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:54 compute-0 sudo[301023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:54 compute-0 sudo[301023]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:54 compute-0 sudo[301048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:54 compute-0 sudo[301048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:54 compute-0 sudo[301048]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 29 08:21:54 compute-0 sudo[301073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:21:54 compute-0 sudo[301073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:21:54 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.503754964 +0000 UTC m=+0.041152393 container create 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:21:54 compute-0 systemd[1]: Started libpod-conmon-28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10.scope.
Nov 29 08:21:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.486550503 +0000 UTC m=+0.023947952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.590284402 +0000 UTC m=+0.127681851 container init 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.597565887 +0000 UTC m=+0.134963306 container start 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.601068461 +0000 UTC m=+0.138465890 container attach 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:21:54 compute-0 zealous_lewin[301156]: 167 167
Nov 29 08:21:54 compute-0 systemd[1]: libpod-28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10.scope: Deactivated successfully.
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.604594205 +0000 UTC m=+0.141991634 container died 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c36736768502e7f493ad15241c82111871f54549cf7c8e23f7ba0f1bd81193-merged.mount: Deactivated successfully.
Nov 29 08:21:54 compute-0 podman[301140]: 2025-11-29 08:21:54.639614543 +0000 UTC m=+0.177011962 container remove 28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lewin, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:21:54 compute-0 systemd[1]: libpod-conmon-28197a62c99450e15c3825faada2751850864ad387bf609806343e76e087ab10.scope: Deactivated successfully.
Nov 29 08:21:54 compute-0 podman[301178]: 2025-11-29 08:21:54.795499549 +0000 UTC m=+0.039228292 container create a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:21:54 compute-0 systemd[1]: Started libpod-conmon-a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b.scope.
Nov 29 08:21:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:54 compute-0 podman[301178]: 2025-11-29 08:21:54.868647988 +0000 UTC m=+0.112376731 container init a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:21:54 compute-0 podman[301178]: 2025-11-29 08:21:54.779042428 +0000 UTC m=+0.022771191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:54 compute-0 podman[301178]: 2025-11-29 08:21:54.877810254 +0000 UTC m=+0.121538997 container start a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 08:21:54 compute-0 podman[301178]: 2025-11-29 08:21:54.880862175 +0000 UTC m=+0.124590948 container attach a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:21:55 compute-0 ceph-mon[75237]: pgmap v2111: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 29 08:21:55 compute-0 nova_compute[255040]: 2025-11-29 08:21:55.442 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:21:55 compute-0 recursing_fermat[301194]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:21:55 compute-0 recursing_fermat[301194]: --> relative data size: 1.0
Nov 29 08:21:55 compute-0 recursing_fermat[301194]: --> All data devices are unavailable
Nov 29 08:21:55 compute-0 nova_compute[255040]: 2025-11-29 08:21:55.927 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:21:55 compute-0 systemd[1]: libpod-a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b.scope: Deactivated successfully.
Nov 29 08:21:55 compute-0 systemd[1]: libpod-a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b.scope: Consumed 1.024s CPU time.
Nov 29 08:21:55 compute-0 conmon[301194]: conmon a4f02dcb267cff0f7d57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b.scope/container/memory.events
Nov 29 08:21:55 compute-0 podman[301178]: 2025-11-29 08:21:55.959831725 +0000 UTC m=+1.203560478 container died a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 08:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a0e44dcbc82965a807ab6233b98e8e97bb1a75874217a52105fade781d8d03-merged.mount: Deactivated successfully.
Nov 29 08:21:56 compute-0 podman[301178]: 2025-11-29 08:21:56.027698493 +0000 UTC m=+1.271427276 container remove a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 08:21:56 compute-0 systemd[1]: libpod-conmon-a4f02dcb267cff0f7d57dfd2271106824466e24f172743ce60604a21d302794b.scope: Deactivated successfully.
Nov 29 08:21:56 compute-0 sudo[301073]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-0 sudo[301237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:56 compute-0 sudo[301237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-0 sudo[301237]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 29 08:21:56 compute-0 sudo[301262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:56 compute-0 sudo[301262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-0 sudo[301262]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028986552345835774 of space, bias 1.0, pg target 0.8695965703750732 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:21:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:21:56 compute-0 sudo[301287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:56 compute-0 sudo[301287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-0 sudo[301287]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:56 compute-0 sudo[301312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:21:56 compute-0 sudo[301312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.760189342 +0000 UTC m=+0.039819888 container create 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:21:56 compute-0 systemd[1]: Started libpod-conmon-369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8.scope.
Nov 29 08:21:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.836767794 +0000 UTC m=+0.116398360 container init 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.743123496 +0000 UTC m=+0.022754062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.843919075 +0000 UTC m=+0.123549621 container start 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:21:56 compute-0 magical_germain[301390]: 167 167
Nov 29 08:21:56 compute-0 systemd[1]: libpod-369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8.scope: Deactivated successfully.
Nov 29 08:21:56 compute-0 nova_compute[255040]: 2025-11-29 08:21:56.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:56 compute-0 nova_compute[255040]: 2025-11-29 08:21:56.992 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:21:56 compute-0 nova_compute[255040]: 2025-11-29 08:21:56.992 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.993782949 +0000 UTC m=+0.273413495 container attach 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 08:21:56 compute-0 podman[301374]: 2025-11-29 08:21:56.995334621 +0000 UTC m=+0.274965197 container died 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d7fc75425c2bb893b31eb8aa4e1daea04ec1f09c5087f6fa27de2ad9d3a1b2-merged.mount: Deactivated successfully.
Nov 29 08:21:57 compute-0 podman[301374]: 2025-11-29 08:21:57.09049368 +0000 UTC m=+0.370124256 container remove 369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:21:57 compute-0 systemd[1]: libpod-conmon-369450cc82a644cf4646543190589763ef58703ffc9db2c99517e44e919404d8.scope: Deactivated successfully.
Nov 29 08:21:57 compute-0 ceph-mon[75237]: pgmap v2112: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 29 08:21:57 compute-0 podman[301415]: 2025-11-29 08:21:57.327812626 +0000 UTC m=+0.045684534 container create eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:21:57 compute-0 systemd[1]: Started libpod-conmon-eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7.scope.
Nov 29 08:21:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58627621ec610c9ba7b3c135c615052c7d80856e1364d5fdc6a83701c9bd5b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58627621ec610c9ba7b3c135c615052c7d80856e1364d5fdc6a83701c9bd5b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58627621ec610c9ba7b3c135c615052c7d80856e1364d5fdc6a83701c9bd5b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58627621ec610c9ba7b3c135c615052c7d80856e1364d5fdc6a83701c9bd5b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:57 compute-0 podman[301415]: 2025-11-29 08:21:57.312830915 +0000 UTC m=+0.030702853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:57 compute-0 podman[301415]: 2025-11-29 08:21:57.417409185 +0000 UTC m=+0.135281123 container init eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:21:57 compute-0 podman[301415]: 2025-11-29 08:21:57.432647394 +0000 UTC m=+0.150519322 container start eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:21:57 compute-0 podman[301415]: 2025-11-29 08:21:57.436424515 +0000 UTC m=+0.154296473 container attach eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:21:57 compute-0 ovn_controller[153295]: 2025-11-29T08:21:57Z|00280|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:21:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]: {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     "0": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "devices": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "/dev/loop3"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             ],
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_name": "ceph_lv0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_size": "21470642176",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "name": "ceph_lv0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "tags": {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_name": "ceph",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.crush_device_class": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.encrypted": "0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_id": "0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.vdo": "0"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             },
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "vg_name": "ceph_vg0"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         }
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     ],
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     "1": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "devices": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "/dev/loop4"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             ],
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_name": "ceph_lv1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_size": "21470642176",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "name": "ceph_lv1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "tags": {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_name": "ceph",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.crush_device_class": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.encrypted": "0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_id": "1",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.vdo": "0"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             },
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "vg_name": "ceph_vg1"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         }
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     ],
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     "2": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "devices": [
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "/dev/loop5"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             ],
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_name": "ceph_lv2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_size": "21470642176",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "name": "ceph_lv2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "tags": {
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.cluster_name": "ceph",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.crush_device_class": "",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.encrypted": "0",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osd_id": "2",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:                 "ceph.vdo": "0"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             },
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "type": "block",
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:             "vg_name": "ceph_vg2"
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:         }
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]:     ]
Nov 29 08:21:58 compute-0 elastic_leavitt[301431]: }
Nov 29 08:21:58 compute-0 systemd[1]: libpod-eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7.scope: Deactivated successfully.
Nov 29 08:21:58 compute-0 podman[301415]: 2025-11-29 08:21:58.217596949 +0000 UTC m=+0.935468867 container died eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 08:21:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-58627621ec610c9ba7b3c135c615052c7d80856e1364d5fdc6a83701c9bd5b12-merged.mount: Deactivated successfully.
Nov 29 08:21:58 compute-0 podman[301415]: 2025-11-29 08:21:58.275299024 +0000 UTC m=+0.993170942 container remove eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:21:58 compute-0 systemd[1]: libpod-conmon-eacbdf159c140291c1bcda55585cd746c4b30b2cd62acf85742e191c5918fab7.scope: Deactivated successfully.
Nov 29 08:21:58 compute-0 sudo[301312]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:58 compute-0 sudo[301454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:58 compute-0 sudo[301454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:58 compute-0 sudo[301454]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:58 compute-0 sudo[301479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:21:58 compute-0 sudo[301479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:58 compute-0 sudo[301479]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:58 compute-0 sudo[301504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:21:58 compute-0 sudo[301504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:58 compute-0 sudo[301504]: pam_unix(sudo:session): session closed for user root
Nov 29 08:21:58 compute-0 sudo[301529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:21:58 compute-0 sudo[301529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:21:58 compute-0 podman[301594]: 2025-11-29 08:21:58.918928444 +0000 UTC m=+0.053471503 container create 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:21:58 compute-0 systemd[1]: Started libpod-conmon-9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b.scope.
Nov 29 08:21:58 compute-0 podman[301594]: 2025-11-29 08:21:58.8904364 +0000 UTC m=+0.024979479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:21:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572890903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:21:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572890903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:59 compute-0 podman[301594]: 2025-11-29 08:21:59.010591979 +0000 UTC m=+0.145135008 container init 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:21:59 compute-0 podman[301594]: 2025-11-29 08:21:59.019272661 +0000 UTC m=+0.153815700 container start 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:21:59 compute-0 systemd[1]: libpod-9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b.scope: Deactivated successfully.
Nov 29 08:21:59 compute-0 podman[301594]: 2025-11-29 08:21:59.023119094 +0000 UTC m=+0.157662123 container attach 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:21:59 compute-0 adoring_wing[301610]: 167 167
Nov 29 08:21:59 compute-0 conmon[301610]: conmon 9ab3b2c4575f2c5b526d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b.scope/container/memory.events
Nov 29 08:21:59 compute-0 podman[301594]: 2025-11-29 08:21:59.024158512 +0000 UTC m=+0.158701531 container died 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3623bb49c92fe71d8c4fa6f8ee7e0cad0189bed05a8d9462d634a849e172bab-merged.mount: Deactivated successfully.
Nov 29 08:21:59 compute-0 podman[301594]: 2025-11-29 08:21:59.061838432 +0000 UTC m=+0.196381461 container remove 9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:21:59 compute-0 systemd[1]: libpod-conmon-9ab3b2c4575f2c5b526dccf47fe676965b8cce0672d505fa1b3e0421de3fb45b.scope: Deactivated successfully.
Nov 29 08:21:59 compute-0 podman[301634]: 2025-11-29 08:21:59.231921207 +0000 UTC m=+0.041908284 container create f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:21:59 compute-0 ceph-mon[75237]: pgmap v2113: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:21:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2572890903' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:21:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2572890903' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:21:59 compute-0 systemd[1]: Started libpod-conmon-f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564.scope.
Nov 29 08:21:59 compute-0 podman[301634]: 2025-11-29 08:21:59.213971866 +0000 UTC m=+0.023958973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:21:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae2194e9093e66482116cdb89509d288a1017d20d4fc2e9d6b2af5be665250f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae2194e9093e66482116cdb89509d288a1017d20d4fc2e9d6b2af5be665250f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae2194e9093e66482116cdb89509d288a1017d20d4fc2e9d6b2af5be665250f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae2194e9093e66482116cdb89509d288a1017d20d4fc2e9d6b2af5be665250f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:21:59 compute-0 podman[301634]: 2025-11-29 08:21:59.336527779 +0000 UTC m=+0.146514896 container init f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:21:59 compute-0 podman[301634]: 2025-11-29 08:21:59.350328139 +0000 UTC m=+0.160315226 container start f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:21:59 compute-0 podman[301634]: 2025-11-29 08:21:59.353705489 +0000 UTC m=+0.163692616 container attach f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:22:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]: {
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_id": 2,
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "type": "bluestore"
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     },
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_id": 0,
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "type": "bluestore"
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     },
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_id": 1,
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:         "type": "bluestore"
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]:     }
Nov 29 08:22:00 compute-0 ecstatic_chebyshev[301648]: }
Nov 29 08:22:00 compute-0 systemd[1]: libpod-f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564.scope: Deactivated successfully.
Nov 29 08:22:00 compute-0 podman[301682]: 2025-11-29 08:22:00.365735386 +0000 UTC m=+0.028974478 container died f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:22:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae2194e9093e66482116cdb89509d288a1017d20d4fc2e9d6b2af5be665250f-merged.mount: Deactivated successfully.
Nov 29 08:22:00 compute-0 podman[301682]: 2025-11-29 08:22:00.419454734 +0000 UTC m=+0.082693756 container remove f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:22:00 compute-0 systemd[1]: libpod-conmon-f948a0d9faceb03865d023e5b551bf0e3626dd31726b9db5ce95f88494a06564.scope: Deactivated successfully.
Nov 29 08:22:00 compute-0 podman[301681]: 2025-11-29 08:22:00.441544616 +0000 UTC m=+0.087824724 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 29 08:22:00 compute-0 nova_compute[255040]: 2025-11-29 08:22:00.445 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:00 compute-0 sudo[301529]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:22:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:22:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:22:00 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:22:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev d6e9b76c-a4d2-42da-84dd-b9190c01451c does not exist
Nov 29 08:22:00 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f35d4f5f-509d-4417-83a5-298d28957748 does not exist
Nov 29 08:22:00 compute-0 sudo[301716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:22:00 compute-0 sudo[301716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:00 compute-0 sudo[301716]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:00 compute-0 sudo[301741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:22:00 compute-0 sudo[301741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:22:00 compute-0 sudo[301741]: pam_unix(sudo:session): session closed for user root
Nov 29 08:22:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:00 compute-0 nova_compute[255040]: 2025-11-29 08:22:00.930 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:01 compute-0 ceph-mon[75237]: pgmap v2114: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 08:22:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:22:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:22:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 21 KiB/s wr, 2 op/s
Nov 29 08:22:03 compute-0 ceph-mon[75237]: pgmap v2115: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 21 KiB/s wr, 2 op/s
Nov 29 08:22:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:22:04 compute-0 ceph-mon[75237]: pgmap v2116: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:22:05 compute-0 nova_compute[255040]: 2025-11-29 08:22:05.447 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:05 compute-0 nova_compute[255040]: 2025-11-29 08:22:05.932 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 333 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Nov 29 08:22:07 compute-0 ceph-mon[75237]: pgmap v2117: 305 pgs: 305 active+clean; 333 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Nov 29 08:22:07 compute-0 nova_compute[255040]: 2025-11-29 08:22:07.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:08 compute-0 ceph-mon[75237]: pgmap v2118: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:08 compute-0 nova_compute[255040]: 2025-11-29 08:22:08.935 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:08 compute-0 nova_compute[255040]: 2025-11-29 08:22:08.936 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:08 compute-0 nova_compute[255040]: 2025-11-29 08:22:08.953 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.066 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.067 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.074 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.075 255071 INFO nova.compute.claims [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.155 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:09 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805234533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.581 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.587 255071 DEBUG nova.compute.provider_tree [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.608 255071 DEBUG nova.scheduler.client.report [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:09 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/805234533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.635 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.636 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.714 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.714 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.739 255071 INFO nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.764 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.808 255071 INFO nova.virt.block_device [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Booting with volume 1640093f-533d-43f4-ac27-350862646719 at /dev/vda
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.957 255071 DEBUG os_brick.utils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.959 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.973 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.974 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1ffacd-3316-4db1-b3de-3eba9a00ee85]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.976 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.986 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.986 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9e244c-453d-4673-9e39-ee623c50cf1b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:09 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.988 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.999 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:09.999 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[38a0931b-6be8-4b3a-989a-9c3da15ba93b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.001 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[908bb3d7-2cf5-4d52-8f38-f4efcb12ff81]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.003 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.031 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.036 255071 DEBUG os_brick.initiator.connectors.lightos [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.037 255071 DEBUG os_brick.initiator.connectors.lightos [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.037 255071 DEBUG os_brick.initiator.connectors.lightos [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.038 255071 DEBUG os_brick.utils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.039 255071 DEBUG nova.virt.block_device [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updating existing volume attachment record: e3430992-f796-4752-b701-5cfca75f3756 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:22:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.435 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.452 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.456 255071 WARNING nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.456 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Triggering sync for uuid 8573a183-5b0d-4d79-ad1c-f531019fbe12 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.457 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690260714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:10 compute-0 ceph-mon[75237]: pgmap v2119: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:10 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/690260714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.666 255071 DEBUG nova.policy [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a08e1ef223b748efa4d5bdc804150f97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd25c6608beec4f818c6e402939192f16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:22:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.842 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.843 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.843 255071 INFO nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Creating image(s)
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.844 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.844 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Ensure instance console log exists: /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.845 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.845 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.845 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:10 compute-0 nova_compute[255040]: 2025-11-29 08:22:10.934 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:11 compute-0 nova_compute[255040]: 2025-11-29 08:22:11.496 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Successfully created port: 959bf4ec-937f-4a99-904e-6fe192ad94b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:22:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.827 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Successfully updated port: 959bf4ec-937f-4a99-904e-6fe192ad94b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.844 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.844 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquired lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.845 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.917 255071 DEBUG nova.compute.manager [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-changed-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.917 255071 DEBUG nova.compute.manager [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Refreshing instance network info cache due to event network-changed-959bf4ec-937f-4a99-904e-6fe192ad94b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:12 compute-0 nova_compute[255040]: 2025-11-29 08:22:12.917 255071 DEBUG oslo_concurrency.lockutils [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:12 compute-0 podman[301795]: 2025-11-29 08:22:12.957170729 +0000 UTC m=+0.121921906 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:22:13 compute-0 ceph-mon[75237]: pgmap v2120: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.284 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.894 255071 DEBUG nova.network.neutron [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updating instance_info_cache with network_info: [{"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.916 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Releasing lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.917 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Instance network_info: |[{"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.917 255071 DEBUG oslo_concurrency.lockutils [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.918 255071 DEBUG nova.network.neutron [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Refreshing network info cache for port 959bf4ec-937f-4a99-904e-6fe192ad94b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.922 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Start _get_guest_xml network_info=[{"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1640093f-533d-43f4-ac27-350862646719', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8573a183-5b0d-4d79-ad1c-f531019fbe12', 'attached_at': '', 'detached_at': '', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'serial': '1640093f-533d-43f4-ac27-350862646719'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': 'e3430992-f796-4752-b701-5cfca75f3756', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.928 255071 WARNING nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.933 255071 DEBUG nova.virt.libvirt.host [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.934 255071 DEBUG nova.virt.libvirt.host [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.943 255071 DEBUG nova.virt.libvirt.host [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.944 255071 DEBUG nova.virt.libvirt.host [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
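[annotation] The two probes above look for a CPU controller first under cgroup v1 (missing on this host) and then under cgroup v2 (found). A rough standalone equivalent, assuming the conventional /sys/fs/cgroup layout rather than anything read from nova's configuration:

import os

def has_cgroupsv1_cpu_controller():
    # cgroup v1 mounts one directory per controller, e.g. /sys/fs/cgroup/cpu
    return os.path.isdir("/sys/fs/cgroup/cpu")

def has_cgroupsv2_cpu_controller():
    # cgroup v2 lists enabled controllers in a single file
    try:
        with open("/sys/fs/cgroup/cgroup.controllers") as f:
            return "cpu" in f.read().split()
    except FileNotFoundError:
        return False

# On this host the log shows: v1 -> False, v2 -> True
print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())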
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.945 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.945 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.946 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.948 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.949 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.950 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.950 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.951 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.952 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.953 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.953 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.954 255071 DEBUG nova.virt.hardware [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
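[annotation] The topology walk above starts from unset flavor/image preferences (0:0:0), caps every dimension at 65536, and ends with the one viable layout for a single vCPU. A toy illustration of that search, not nova's exact implementation: enumerate the (sockets, cores, threads) triples whose product equals the vCPU count, subject to the caps:

def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    found.append((s, c, t))
    return found

# For the 1-vCPU m1.nano flavor this reproduces the single logged result:
print(possible_topologies(1, 65536, 65536, 65536))  # [(1, 1, 1)]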
Nov 29 08:22:13 compute-0 nova_compute[255040]: 2025-11-29 08:22:13.995 255071 DEBUG nova.storage.rbd_utils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
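[annotation] rbd_utils reports the config-drive image as absent before nova creates it. A sketch of the same probe with the python rados/rbd bindings; the pool name 'vms' is inferred from the disk.config source path in the guest XML further down, and client.openstack matches the logged ceph user:

import rados
import rbd

IMAGE = "8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config"

with rados.Rados(conffile="/etc/ceph/ceph.conf",
                 name="client.openstack") as cluster:
    with cluster.open_ioctx("vms") as ioctx:
        try:
            with rbd.Image(ioctx, IMAGE):
                exists = True
        except rbd.ImageNotFound:
            # this is the case the log line above reports
            exists = False

print(exists)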
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.001 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:14 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:14 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512008239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.441 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
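[annotation] The monmap lookup that just returned in 0.440s is an ordinary subprocess call producing JSON. A standalone sketch of the same probe; the "mons"/"name"/"addr" keys follow the usual ceph mon dump JSON layout:

import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
monmap = json.loads(out)
for mon in monmap["mons"]:
    # on this deployment: compute-0 at 192.168.122.100:6789
    print(mon["name"], mon.get("addr"))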
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.585 255071 DEBUG os_brick.encryptors [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Using volume encryption metadata '{'encryption_key_id': '4eb07245-a2c4-485b-a12e-e56abd96d121', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1640093f-533d-43f4-ac27-350862646719', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '8573a183-5b0d-4d79-ad1c-f531019fbe12', 'attached_at': '', 'detached_at': '', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'serial': '1640093f-533d-43f4-ac27-350862646719'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
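[annotation] With provider 'luks' and control_location 'front-end', the passphrase never enters the guest: QEMU opens the LUKS layer itself, so the volume's disk definition later in this log carries an <encryption format="luks"> child that references a libvirt passphrase secret. A minimal rendering of that element with ElementTree, reusing the secret UUID that appears in the guest XML below:

import xml.etree.ElementTree as ET

enc = ET.Element("encryption", format="luks")
ET.SubElement(enc, "secret", type="passphrase",
              uuid="a72e925f-7a11-4bbe-86c8-15cf97065933")
# -> <encryption format="luks"><secret type="passphrase" uuid="..." /></encryption>
print(ET.tostring(enc, encoding="unicode"))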
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.587 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.604 255071 DEBUG barbicanclient.v1.secrets [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/4eb07245-a2c4-485b-a12e-e56abd96d121 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.605 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.631 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.632 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.654 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.655 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.676 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.676 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.696 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.696 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.724 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.725 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.749 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.750 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.779 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.779 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.803 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.803 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.807 255071 DEBUG nova.network.neutron [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updated VIF entry in instance network info cache for port 959bf4ec-937f-4a99-904e-6fe192ad94b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.807 255071 DEBUG nova.network.neutron [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updating instance_info_cache with network_info: [{"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.824 255071 DEBUG oslo_concurrency.lockutils [req-9238453d-5bc2-489f-b98f-e674cb679657 req-9f77887a-f126-43f9-bdcf-82d98802a0ce cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.829 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.830 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.918 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.919 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.939 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.940 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.958 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.959 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.981 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:14 compute-0 nova_compute[255040]: 2025-11-29 08:22:14.982 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.002 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.002 255071 INFO barbicanclient.base [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/4eb07245-a2c4-485b-a12e-e56abd96d121
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.022 255071 DEBUG barbicanclient.client [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
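[annotation] The run of 200 responses above is nova pulling the volume's encryption key out of Barbican through python-barbicanclient. A condensed sketch of that retrieval; the auth URL and credentials are placeholders, while the secret href is the one logged at 08:22:14.604:

from keystoneauth1 import identity, session
from barbicanclient import client

auth = identity.Password(
    auth_url="https://keystone.example.com/v3",   # placeholder endpoint
    username="nova", password="secret",           # placeholder credentials
    project_name="service",
    user_domain_name="Default", project_domain_name="Default")

barbican = client.Client(session=session.Session(auth=auth))
secret = barbican.secrets.get(
    "https://barbican-internal.openstack.svc:9311/secrets/"
    "4eb07245-a2c4-485b-a12e-e56abd96d121")
passphrase = secret.payload   # fetched lazily; becomes the LUKS passphrase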
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.023 255071 DEBUG nova.virt.libvirt.host [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <volume>1640093f-533d-43f4-ac27-350862646719</volume>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:22:15 compute-0 nova_compute[255040]: </secret>
Nov 29 08:22:15 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
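[annotation] create_secret pairs the XML above with the key material fetched from Barbican. A sketch of the same two steps through the libvirt-python bindings; the passphrase bytes are a stand-in for the real payload:

import libvirt

SECRET_XML = """<secret ephemeral="no" private="no">
  <usage type="volume">
    <volume>1640093f-533d-43f4-ac27-350862646719</volume>
  </usage>
</secret>"""

conn = libvirt.open("qemu:///system")
secret = conn.secretDefineXML(SECRET_XML)      # registers the secret object
secret.setValue(b"passphrase-from-barbican")   # placeholder key material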
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.051 255071 DEBUG nova.virt.libvirt.vif [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1303239054',display_name='tempest-TransferEncryptedVolumeTest-server-1303239054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1303239054',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-p0zwkc37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:09Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=8573a183-5b0d-4d79-ad1c-f531019fbe12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
Nov 29 08:22:15 compute-0 nova_compute[255040]:  get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.051 255071 DEBUG nova.network.os_vif_util [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.052 255071 DEBUG nova.network.os_vif_util [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.054 255071 DEBUG nova.objects.instance [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8573a183-5b0d-4d79-ad1c-f531019fbe12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.065 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <uuid>8573a183-5b0d-4d79-ad1c-f531019fbe12</uuid>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <name>instance-0000001d</name>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1303239054</nova:name>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:22:13</nova:creationTime>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:user uuid="a08e1ef223b748efa4d5bdc804150f97">tempest-TransferEncryptedVolumeTest-1043863442-project-member</nova:user>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:project uuid="d25c6608beec4f818c6e402939192f16">tempest-TransferEncryptedVolumeTest-1043863442</nova:project>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <nova:port uuid="959bf4ec-937f-4a99-904e-6fe192ad94b1">
Nov 29 08:22:15 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <system>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="serial">8573a183-5b0d-4d79-ad1c-f531019fbe12</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="uuid">8573a183-5b0d-4d79-ad1c-f531019fbe12</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </system>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <os>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </os>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <features>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </features>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </source>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-1640093f-533d-43f4-ac27-350862646719">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </source>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <serial>1640093f-533d-43f4-ac27-350862646719</serial>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:22:15 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="a72e925f-7a11-4bbe-86c8-15cf97065933"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:a4:71:d2"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <target dev="tap959bf4ec-93"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/console.log" append="off"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <video>
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </video>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:22:15 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:22:15 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:22:15 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:22:15 compute-0 nova_compute[255040]: </domain>
Nov 29 08:22:15 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
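[annotation] With the domain document complete, starting the guest reduces to handing that XML to libvirt. A simplified sketch of the remaining step (nova's real spawn path also registers lifecycle-event callbacks and cleanup handlers); the file name is hypothetical and stands for the XML dumped above:

import libvirt

with open("instance-0000001d.xml") as f:   # hypothetical copy of the XML above
    domain_xml = f.read()

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(domain_xml)   # persistent definition
dom.create()                       # powers the guest on
print(dom.name(), dom.UUIDString())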
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.068 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Preparing to wait for external event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.069 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.069 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.069 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
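[annotation] The three lockutils entries above are one acquire/release cycle guarding event registration. A sketch of the same pattern with oslo.concurrency's synchronized decorator, using the instance-scoped lock name from the log; the function body is a placeholder for nova's real event bookkeeping:

from oslo_concurrency import lockutils

@lockutils.synchronized("8573a183-5b0d-4d79-ad1c-f531019fbe12-events")
def create_or_get_event():
    # critical section: look up or register the event the compute
    # manager will wait on before resuming the boot
    return "network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1"

print(create_or_get_event())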
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.070 255071 DEBUG nova.virt.libvirt.vif [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1303239054',display_name='tempest-TransferEncryptedVolumeTest-server-1303239054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1303239054',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-p0zwkc37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:09Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=8573a183-5b0d-4d79-ad1c-f531019fbe12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
Nov 29 08:22:15 compute-0 nova_compute[255040]:  /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.071 255071 DEBUG nova.network.os_vif_util [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.072 255071 DEBUG nova.network.os_vif_util [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.072 255071 DEBUG os_vif [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.073 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.074 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.075 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.081 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.082 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap959bf4ec-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.083 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap959bf4ec-93, col_values=(('external_ids', {'iface-id': '959bf4ec-937f-4a99-904e-6fe192ad94b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:71:d2', 'vm-uuid': '8573a183-5b0d-4d79-ad1c-f531019fbe12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:15 compute-0 NetworkManager[49116]: <info>  [1764404535.0859] manager: (tap959bf4ec-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.085 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.089 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.095 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.097 255071 INFO os_vif [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93')
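[annotation] The ovsdbapp transaction that just completed (AddPortCommand plus DbSetCommand) is equivalent to a single ovs-vsctl invocation. A sketch of that equivalent with the bridge, port and external_ids copied from the logged commands; shown for orientation only, since os-vif talks to OVSDB through the IDL rather than shelling out:

import subprocess

subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap959bf4ec-93",
    "--", "set", "Interface", "tap959bf4ec-93",
    "external_ids:iface-id=959bf4ec-937f-4a99-904e-6fe192ad94b1",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:a4:71:d2",
    "external_ids:vm-uuid=8573a183-5b0d-4d79-ad1c-f531019fbe12",
])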
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.142 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.142 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.143 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No VIF found with MAC fa:16:3e:a4:71:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.143 255071 INFO nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Using config drive
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.167 255071 DEBUG nova.storage.rbd_utils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:15 compute-0 ceph-mon[75237]: pgmap v2121: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:15 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3512008239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.451 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.815 255071 INFO nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Creating config drive at /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.821 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ypp4w0e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.951 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ypp4w0e" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.974 255071 DEBUG nova.storage.rbd_utils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image 8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:15 compute-0 nova_compute[255040]: 2025-11-29 08:22:15.978 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config 8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.129 255071 DEBUG oslo_concurrency.processutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config 8573a183-5b0d-4d79-ad1c-f531019fbe12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.130 255071 INFO nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Deleting local config drive /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12/disk.config because it was imported into RBD.
Nov 29 08:22:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:16 compute-0 kernel: tap959bf4ec-93: entered promiscuous mode
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.2040] manager: (tap959bf4ec-93): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.203 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 ovn_controller[153295]: 2025-11-29T08:22:16Z|00281|binding|INFO|Claiming lport 959bf4ec-937f-4a99-904e-6fe192ad94b1 for this chassis.
Nov 29 08:22:16 compute-0 ovn_controller[153295]: 2025-11-29T08:22:16Z|00282|binding|INFO|959bf4ec-937f-4a99-904e-6fe192ad94b1: Claiming fa:16:3e:a4:71:d2 10.100.0.7
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.215 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:71:d2 10.100.0.7'], port_security=['fa:16:3e:a4:71:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8573a183-5b0d-4d79-ad1c-f531019fbe12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd377438-6ae0-49fd-8ec7-c089abbaa180', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=959bf4ec-937f-4a99-904e-6fe192ad94b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.217 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 959bf4ec-937f-4a99-904e-6fe192ad94b1 in datapath a234aa60-c8c5-4137-96cd-77f576498813 bound to our chassis
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.218 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:16 compute-0 ovn_controller[153295]: 2025-11-29T08:22:16Z|00283|binding|INFO|Setting lport 959bf4ec-937f-4a99-904e-6fe192ad94b1 ovn-installed in OVS
Nov 29 08:22:16 compute-0 ovn_controller[153295]: 2025-11-29T08:22:16Z|00284|binding|INFO|Setting lport 959bf4ec-937f-4a99-904e-6fe192ad94b1 up in Southbound
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.229 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.231 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dbddee37-32e0-4b3a-a37e-db03b3063d00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.233 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa234aa60-c1 in ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.231 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.235 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa234aa60-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.235 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c86e62-ece6-4658-ab57-9304a99c5dc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.236 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[68bba274-ec5e-4edb-9a4f-a3fc329304d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 systemd-machined[216271]: New machine qemu-29-instance-0000001d.
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.257 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[7edcf729-f792-4c57-af93-a1af094541e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.273 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[08b5be75-86cc-45a4-8b30-081bd6022f07]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 systemd-udevd[301937]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.310 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[28db1056-834f-4168-b008-ba95550e559d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.3142] device (tap959bf4ec-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.3155] device (tap959bf4ec-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.317 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[578ca46f-bd74-4274-a780-c22ae6bb195d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.3178] manager: (tapa234aa60-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.352 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b51e4a-fdfe-410a-8fd1-b7980428c9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.356 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[17586ccb-199b-417b-98e7-d617c1182e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.3824] device (tapa234aa60-c0): carrier: link connected
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.386 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[41559df8-b7f6-4ec7-81d2-e8e02d1455c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.402 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaae20c-e384-427e-a3bb-c7890adbccc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677372, 'reachable_time': 17735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301965, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.416 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f4a064-fa7d-4695-b575-56fe756091bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9b6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677372, 'tstamp': 677372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301966, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.437 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[75516f02-5360-4857-8549-cffe845cd521]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677372, 'reachable_time': 17735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301967, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.463 255071 DEBUG nova.compute.manager [req-dd676fbf-e662-4c87-b877-d3f8262cab35 req-3070d93b-91be-44f3-b9d3-55b6ba57b903 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.463 255071 DEBUG oslo_concurrency.lockutils [req-dd676fbf-e662-4c87-b877-d3f8262cab35 req-3070d93b-91be-44f3-b9d3-55b6ba57b903 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.465 255071 DEBUG oslo_concurrency.lockutils [req-dd676fbf-e662-4c87-b877-d3f8262cab35 req-3070d93b-91be-44f3-b9d3-55b6ba57b903 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.465 255071 DEBUG oslo_concurrency.lockutils [req-dd676fbf-e662-4c87-b877-d3f8262cab35 req-3070d93b-91be-44f3-b9d3-55b6ba57b903 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.465 255071 DEBUG nova.compute.manager [req-dd676fbf-e662-4c87-b877-d3f8262cab35 req-3070d93b-91be-44f3-b9d3-55b6ba57b903 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Processing event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.480 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[e9de017e-c0f0-415e-a90f-0e2f87a3a013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.543 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[8c75b169-1b9b-46c2-abf2-31b342439146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.544 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.544 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.544 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa234aa60-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.546 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 NetworkManager[49116]: <info>  [1764404536.5475] manager: (tapa234aa60-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 08:22:16 compute-0 kernel: tapa234aa60-c0: entered promiscuous mode
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.549 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa234aa60-c0, col_values=(('external_ids', {'iface-id': '821a8872-735e-4a04-8244-d3a33097614d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:16 compute-0 ovn_controller[153295]: 2025-11-29T08:22:16Z|00285|binding|INFO|Releasing lport 821a8872-735e-4a04-8244-d3a33097614d from this chassis (sb_readonly=0)
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.550 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 nova_compute[255040]: 2025-11-29 08:22:16.562 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.563 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.564 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca060dd-d2fa-4936-b10b-c9111bacfed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.565 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:22:16 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:16.567 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'env', 'PROCESS_TAG=haproxy-a234aa60-c8c5-4137-96cd-77f576498813', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a234aa60-c8c5-4137-96cd-77f576498813.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:22:16 compute-0 podman[302035]: 2025-11-29 08:22:16.959317675 +0000 UTC m=+0.061619961 container create 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:22:17 compute-0 systemd[1]: Started libpod-conmon-096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916.scope.
Nov 29 08:22:17 compute-0 podman[302035]: 2025-11-29 08:22:16.923688912 +0000 UTC m=+0.025991298 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:22:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53d2b2d61f45b623cd210ae70072d94174163b664d2b1bd017460a10bd811ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:22:17 compute-0 podman[302035]: 2025-11-29 08:22:17.054547576 +0000 UTC m=+0.156849882 container init 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:22:17 compute-0 podman[302035]: 2025-11-29 08:22:17.061200225 +0000 UTC m=+0.163502511 container start 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:22:17 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [NOTICE]   (302055) : New worker (302057) forked
Nov 29 08:22:17 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [NOTICE]   (302055) : Loading success.
Nov 29 08:22:17 compute-0 ceph-mon[75237]: pgmap v2122: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 29 08:22:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.2 MiB/s wr, 4 op/s
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.547 255071 DEBUG nova.compute.manager [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.548 255071 DEBUG oslo_concurrency.lockutils [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.548 255071 DEBUG oslo_concurrency.lockutils [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.549 255071 DEBUG oslo_concurrency.lockutils [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.549 255071 DEBUG nova.compute.manager [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] No waiting events found dispatching network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:18 compute-0 nova_compute[255040]: 2025-11-29 08:22:18.549 255071 WARNING nova.compute.manager [req-3240eee1-cecd-424b-b953-60362a34019f req-712326a5-e5ba-4b6f-b658-e12820823795 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received unexpected event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 for instance with vm_state building and task_state spawning.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.099 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404539.0993106, 8573a183-5b0d-4d79-ad1c-f531019fbe12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.100 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] VM Started (Lifecycle Event)
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.102 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.106 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.110 255071 INFO nova.virt.libvirt.driver [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Instance spawned successfully.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.110 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.120 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.127 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.132 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.133 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.133 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.133 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.134 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.134 255071 DEBUG nova.virt.libvirt.driver [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.159 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.159 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404539.1020253, 8573a183-5b0d-4d79-ad1c-f531019fbe12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.160 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] VM Paused (Lifecycle Event)
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.185 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.188 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404539.1055553, 8573a183-5b0d-4d79-ad1c-f531019fbe12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.188 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] VM Resumed (Lifecycle Event)
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.192 255071 INFO nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Took 8.35 seconds to spawn the instance on the hypervisor.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.193 255071 DEBUG nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.215 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.219 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:22:19 compute-0 ceph-mon[75237]: pgmap v2123: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.2 MiB/s wr, 4 op/s
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.252 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.255 255071 INFO nova.compute.manager [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Took 10.26 seconds to build instance.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.275 255071 DEBUG oslo_concurrency.lockutils [None req-9a5256bc-a3f6-44e4-9847-5c77d100b3f1 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.276 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.276 255071 INFO nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] During sync_power_state the instance has a pending task (block_device_mapping). Skip.
Nov 29 08:22:19 compute-0 nova_compute[255040]: 2025-11-29 08:22:19.277 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:20 compute-0 nova_compute[255040]: 2025-11-29 08:22:20.085 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 29 08:22:20 compute-0 nova_compute[255040]: 2025-11-29 08:22:20.492 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:21 compute-0 ceph-mon[75237]: pgmap v2124: 305 pgs: 305 active+clean; 385 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 29 08:22:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 08:22:23 compute-0 ceph-mon[75237]: pgmap v2125: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 08:22:23 compute-0 podman[302072]: 2025-11-29 08:22:23.938516962 +0000 UTC m=+0.098146180 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:22:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.088 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.188 255071 DEBUG nova.compute.manager [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-changed-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.189 255071 DEBUG nova.compute.manager [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Refreshing instance network info cache due to event network-changed-959bf4ec-937f-4a99-904e-6fe192ad94b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.190 255071 DEBUG oslo_concurrency.lockutils [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.190 255071 DEBUG oslo_concurrency.lockutils [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.191 255071 DEBUG nova.network.neutron [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Refreshing network info cache for port 959bf4ec-937f-4a99-904e-6fe192ad94b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:25 compute-0 ceph-mon[75237]: pgmap v2126: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 08:22:25 compute-0 nova_compute[255040]: 2025-11-29 08:22:25.494 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:22:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:27.147 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:27.148 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:27.148 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:27 compute-0 ceph-mon[75237]: pgmap v2127: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:22:27 compute-0 nova_compute[255040]: 2025-11-29 08:22:27.478 255071 DEBUG nova.network.neutron [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updated VIF entry in instance network info cache for port 959bf4ec-937f-4a99-904e-6fe192ad94b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:27 compute-0 nova_compute[255040]: 2025-11-29 08:22:27.479 255071 DEBUG nova.network.neutron [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updating instance_info_cache with network_info: [{"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:27 compute-0 nova_compute[255040]: 2025-11-29 08:22:27.595 255071 DEBUG oslo_concurrency.lockutils [req-f7414c07-5704-41fd-b04c-8ab13f718146 req-bc42ed92-718e-436e-a644-9a4a7432763d cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-8573a183-5b0d-4d79-ad1c-f531019fbe12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:22:29 compute-0 ceph-mon[75237]: pgmap v2128: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:22:30 compute-0 nova_compute[255040]: 2025-11-29 08:22:30.134 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 29 08:22:30 compute-0 nova_compute[255040]: 2025-11-29 08:22:30.497 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:30 compute-0 podman[302094]: 2025-11-29 08:22:30.924385536 +0000 UTC m=+0.090440933 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 08:22:31 compute-0 ceph-mon[75237]: pgmap v2129: 305 pgs: 305 active+clean; 385 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 29 08:22:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 664 KiB/s wr, 78 op/s
Nov 29 08:22:32 compute-0 ovn_controller[153295]: 2025-11-29T08:22:32Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:71:d2 10.100.0.7
Nov 29 08:22:32 compute-0 ovn_controller[153295]: 2025-11-29T08:22:32Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:71:d2 10.100.0.7
Nov 29 08:22:33 compute-0 ceph-mon[75237]: pgmap v2130: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 664 KiB/s wr, 78 op/s
Nov 29 08:22:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 664 KiB/s wr, 44 op/s
Nov 29 08:22:35 compute-0 nova_compute[255040]: 2025-11-29 08:22:35.172 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:35 compute-0 ceph-mon[75237]: pgmap v2131: 305 pgs: 305 active+clean; 385 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 664 KiB/s wr, 44 op/s
Nov 29 08:22:35 compute-0 nova_compute[255040]: 2025-11-29 08:22:35.499 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 431 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 29 08:22:37 compute-0 ceph-mon[75237]: pgmap v2132: 305 pgs: 305 active+clean; 431 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:22:38
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'volumes']
Nov 29 08:22:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:22:39 compute-0 sshd-session[302112]: Connection closed by 45.78.219.195 port 60916 [preauth]
Nov 29 08:22:39 compute-0 ceph-mon[75237]: pgmap v2133: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 29 08:22:39 compute-0 nova_compute[255040]: 2025-11-29 08:22:39.998 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:40 compute-0 nova_compute[255040]: 2025-11-29 08:22:39.999 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:40 compute-0 nova_compute[255040]: 2025-11-29 08:22:40.175 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 29 08:22:40 compute-0 ceph-mon[75237]: pgmap v2134: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 29 08:22:40 compute-0 nova_compute[255040]: 2025-11-29 08:22:40.533 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.286 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.286 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.287 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.287 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.287 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.289 255071 INFO nova.compute.manager [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Terminating instance
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.290 255071 DEBUG nova.compute.manager [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:22:41 compute-0 kernel: tap959bf4ec-93 (unregistering): left promiscuous mode
Nov 29 08:22:41 compute-0 NetworkManager[49116]: <info>  [1764404561.3393] device (tap959bf4ec-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.351 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 ovn_controller[153295]: 2025-11-29T08:22:41Z|00286|binding|INFO|Releasing lport 959bf4ec-937f-4a99-904e-6fe192ad94b1 from this chassis (sb_readonly=0)
Nov 29 08:22:41 compute-0 ovn_controller[153295]: 2025-11-29T08:22:41Z|00287|binding|INFO|Setting lport 959bf4ec-937f-4a99-904e-6fe192ad94b1 down in Southbound
Nov 29 08:22:41 compute-0 ovn_controller[153295]: 2025-11-29T08:22:41Z|00288|binding|INFO|Removing iface tap959bf4ec-93 ovn-installed in OVS
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.353 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.359 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:71:d2 10.100.0.7'], port_security=['fa:16:3e:a4:71:d2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8573a183-5b0d-4d79-ad1c-f531019fbe12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd377438-6ae0-49fd-8ec7-c089abbaa180', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=959bf4ec-937f-4a99-904e-6fe192ad94b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.360 163500 INFO neutron.agent.ovn.metadata.agent [-] Port 959bf4ec-937f-4a99-904e-6fe192ad94b1 in datapath a234aa60-c8c5-4137-96cd-77f576498813 unbound from our chassis
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.361 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a234aa60-c8c5-4137-96cd-77f576498813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.363 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32699b-2927-447d-8467-a0159d2cc4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.365 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace which is not needed anymore
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.375 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 08:22:41 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.130s CPU time.
Nov 29 08:22:41 compute-0 systemd-machined[216271]: Machine qemu-29-instance-0000001d terminated.
Nov 29 08:22:41 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [NOTICE]   (302055) : haproxy version is 2.8.14-c23fe91
Nov 29 08:22:41 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [NOTICE]   (302055) : path to executable is /usr/sbin/haproxy
Nov 29 08:22:41 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [WARNING]  (302055) : Exiting Master process...
Nov 29 08:22:41 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [ALERT]    (302055) : Current worker (302057) exited with code 143 (Terminated)
Nov 29 08:22:41 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302051]: [WARNING]  (302055) : All workers exited. Exiting... (0)
Nov 29 08:22:41 compute-0 systemd[1]: libpod-096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916.scope: Deactivated successfully.
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.508 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.513 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 podman[302138]: 2025-11-29 08:22:41.51493616 +0000 UTC m=+0.044212704 container died 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.523 255071 INFO nova.virt.libvirt.driver [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Instance destroyed successfully.
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.524 255071 DEBUG nova.objects.instance [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'resources' on Instance uuid 8573a183-5b0d-4d79-ad1c-f531019fbe12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.537 255071 DEBUG nova.virt.libvirt.vif [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1303239054',display_name='tempest-TransferEncryptedVolumeTest-server-1303239054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1303239054',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-p0zwkc37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:19Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=8573a183-5b0d-4d79-ad1c-f531019fbe12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.539 255071 DEBUG nova.network.os_vif_util [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "address": "fa:16:3e:a4:71:d2", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap959bf4ec-93", "ovs_interfaceid": "959bf4ec-937f-4a99-904e-6fe192ad94b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.541 255071 DEBUG nova.network.os_vif_util [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.542 255071 DEBUG os_vif [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916-userdata-shm.mount: Deactivated successfully.
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.546 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53d2b2d61f45b623cd210ae70072d94174163b664d2b1bd017460a10bd811ac-merged.mount: Deactivated successfully.
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.547 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap959bf4ec-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.549 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.551 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.554 255071 INFO os_vif [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:71:d2,bridge_name='br-int',has_traffic_filtering=True,id=959bf4ec-937f-4a99-904e-6fe192ad94b1,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap959bf4ec-93')
Nov 29 08:22:41 compute-0 podman[302138]: 2025-11-29 08:22:41.558015575 +0000 UTC m=+0.087292119 container cleanup 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:22:41 compute-0 systemd[1]: libpod-conmon-096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916.scope: Deactivated successfully.
Nov 29 08:22:41 compute-0 podman[302177]: 2025-11-29 08:22:41.616983024 +0000 UTC m=+0.040727552 container remove 096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.623 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[33f43c4c-0c95-4160-a821-4b431bb8bac7]: (4, ('Sat Nov 29 08:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916)\n096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916\nSat Nov 29 08:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916)\n096bad1943c7812ec623924179c195f4a37946f96b797b472ab444bdf4c10916\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.625 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c4f96-8021-49e0-b3ff-4d41201a005f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.626 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:41 compute-0 kernel: tapa234aa60-c0: left promiscuous mode
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.628 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.635 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[32b7294b-2187-40d7-957c-695aea1e9b7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.640 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.654 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf11ba0-a52a-42b8-b52b-eb7fb06716df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.655 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[24a4a55c-4332-43ff-9d4a-47f44039943e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.673 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7543b7-1cbc-4f1b-b757-5f7d98e70574]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677364, 'reachable_time': 17768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302209, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 systemd[1]: run-netns-ovnmeta\x2da234aa60\x2dc8c5\x2d4137\x2d96cd\x2d77f576498813.mount: Deactivated successfully.
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.678 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:22:41 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:41.679 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[51bd3c49-6d52-4ebd-9992-1baed8340965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.745 255071 INFO nova.virt.libvirt.driver [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Deleting instance files /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12_del
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.746 255071 INFO nova.virt.libvirt.driver [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Deletion of /var/lib/nova/instances/8573a183-5b0d-4d79-ad1c-f531019fbe12_del complete
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.799 255071 INFO nova.compute.manager [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.801 255071 DEBUG oslo.service.loopingcall [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.802 255071 DEBUG nova.compute.manager [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.802 255071 DEBUG nova.network.neutron [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.896 255071 DEBUG nova.compute.manager [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-unplugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.896 255071 DEBUG oslo_concurrency.lockutils [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.896 255071 DEBUG oslo_concurrency.lockutils [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.897 255071 DEBUG oslo_concurrency.lockutils [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.897 255071 DEBUG nova.compute.manager [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] No waiting events found dispatching network-vif-unplugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:41 compute-0 nova_compute[255040]: 2025-11-29 08:22:41.897 255071 DEBUG nova.compute.manager [req-238491eb-040a-4248-8716-0f96ded502ee req-0afa77dd-f5cd-4c5d-8e40-212c8776c4f7 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-unplugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:22:42 compute-0 nova_compute[255040]: 2025-11-29 08:22:42.043 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:42.044 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:42 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:42.045 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:22:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 29 08:22:42 compute-0 nova_compute[255040]: 2025-11-29 08:22:42.880 255071 DEBUG nova.network.neutron [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:42 compute-0 nova_compute[255040]: 2025-11-29 08:22:42.907 255071 INFO nova.compute.manager [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Took 1.10 seconds to deallocate network for instance.
Nov 29 08:22:42 compute-0 nova_compute[255040]: 2025-11-29 08:22:42.949 255071 DEBUG nova.compute.manager [req-b05044ce-b266-4a6f-8d86-2294ea383fce req-7bd4fd81-a1a8-4c1b-b3af-210b2043a679 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-deleted-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:42 compute-0 nova_compute[255040]: 2025-11-29 08:22:42.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.102 255071 INFO nova.compute.manager [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.167 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.167 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.225 255071 DEBUG oslo_concurrency.processutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:43 compute-0 ceph-mon[75237]: pgmap v2135: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:22:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:22:43 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:43 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542159298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.651 255071 DEBUG oslo_concurrency.processutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.660 255071 DEBUG nova.compute.provider_tree [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.678 255071 DEBUG nova.scheduler.client.report [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.700 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.729 255071 INFO nova.scheduler.client.report [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Deleted allocations for instance 8573a183-5b0d-4d79-ad1c-f531019fbe12
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.785 255071 DEBUG oslo_concurrency.lockutils [None req-8917b27a-77dd-412a-bded-22945d570026 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:43 compute-0 podman[302233]: 2025-11-29 08:22:43.938233348 +0000 UTC m=+0.100925804 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.963 255071 DEBUG nova.compute.manager [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.964 255071 DEBUG oslo_concurrency.lockutils [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.964 255071 DEBUG oslo_concurrency.lockutils [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.965 255071 DEBUG oslo_concurrency.lockutils [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "8573a183-5b0d-4d79-ad1c-f531019fbe12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.965 255071 DEBUG nova.compute.manager [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] No waiting events found dispatching network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.965 255071 WARNING nova.compute.manager [req-e0947353-a176-4010-98c1-a759c68f4427 req-b76df2fc-34f4-4710-85a1-7fe96882d25a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Received unexpected event network-vif-plugged-959bf4ec-937f-4a99-904e-6fe192ad94b1 for instance with vm_state deleted and task_state None.
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:43 compute-0 nova_compute[255040]: 2025-11-29 08:22:43.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:22:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 5.2 MiB/s wr, 63 op/s
Nov 29 08:22:44 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2542159298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.997 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.997 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.998 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:22:44 compute-0 nova_compute[255040]: 2025-11-29 08:22:44.998 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:45 compute-0 ceph-mon[75237]: pgmap v2136: 305 pgs: 305 active+clean; 453 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 5.2 MiB/s wr, 63 op/s
Nov 29 08:22:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750858529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.440 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.535 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.606 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.607 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4293MB free_disk=59.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.607 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.608 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.979 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:22:45 compute-0 nova_compute[255040]: 2025-11-29 08:22:45.979 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.000 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 5.2 MiB/s wr, 73 op/s
Nov 29 08:22:46 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1750858529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:46 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:46 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267525984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.476 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.482 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.499 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.519 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.520 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:46 compute-0 nova_compute[255040]: 2025-11-29 08:22:46.549 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:47 compute-0 ceph-mon[75237]: pgmap v2137: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 5.2 MiB/s wr, 73 op/s
Nov 29 08:22:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/267525984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.520 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.520 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.521 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.538 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.539 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.539 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:48 compute-0 nova_compute[255040]: 2025-11-29 08:22:48.988 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:22:49 compute-0 ceph-mon[75237]: pgmap v2138: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Nov 29 08:22:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 29 08:22:50 compute-0 nova_compute[255040]: 2025-11-29 08:22:50.567 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:51 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:51.047 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:51 compute-0 ceph-mon[75237]: pgmap v2139: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 29 08:22:51 compute-0 nova_compute[255040]: 2025-11-29 08:22:51.551 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:51 compute-0 nova_compute[255040]: 2025-11-29 08:22:51.971 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:51 compute-0 nova_compute[255040]: 2025-11-29 08:22:51.971 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:51 compute-0 nova_compute[255040]: 2025-11-29 08:22:51.995 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.081 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.081 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.088 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.088 255071 INFO nova.compute.claims [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.183 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 29 08:22:52 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:22:52 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265292317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.647 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.653 255071 DEBUG nova.compute.provider_tree [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.675 255071 DEBUG nova.scheduler.client.report [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.700 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.701 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.767 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.768 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.797 255071 INFO nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.818 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.864 255071 INFO nova.virt.block_device [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Booting with volume 1640093f-533d-43f4-ac27-350862646719 at /dev/vda
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.991 255071 DEBUG os_brick.utils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:22:52 compute-0 nova_compute[255040]: 2025-11-29 08:22:52.994 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.011 262843 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.011 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9ad4b-c1e2-41b4-8160-dab54c80b24a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.013 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.022 262843 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.022 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[a12bd575-1a92-4ac5-b35b-a97525f35fd9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9694aeb50ce', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.024 262843 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.035 262843 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.035 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7f2e79-6a4e-4404-bdd8-370b1e8f85c4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.037 262843 DEBUG oslo.privsep.daemon [-] privsep: reply[9fee2699-8dcb-4e8d-865a-296c3a078509]: (4, 'a28c55e7-2003-4883-bda8-258835775761') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.037 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.065 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.070 255071 DEBUG os_brick.initiator.connectors.lightos [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.070 255071 DEBUG os_brick.initiator.connectors.lightos [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.071 255071 DEBUG os_brick.initiator.connectors.lightos [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.072 255071 DEBUG os_brick.utils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9694aeb50ce', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a28c55e7-2003-4883-bda8-258835775761', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.072 255071 DEBUG nova.virt.block_device [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updating existing volume attachment record: 9cfb2d13-a7ef-42c1-b043-ca033b1f59d9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:22:53 compute-0 ceph-mon[75237]: pgmap v2140: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 29 08:22:53 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1265292317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:22:53 compute-0 nova_compute[255040]: 2025-11-29 08:22:53.355 255071 DEBUG nova.policy [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a08e1ef223b748efa4d5bdc804150f97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd25c6608beec4f818c6e402939192f16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:22:53 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:53 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257679170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.208 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.209 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.210 255071 INFO nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Creating image(s)
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.210 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.210 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Ensure instance console log exists: /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.211 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.211 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.211 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:54 compute-0 nova_compute[255040]: 2025-11-29 08:22:54.486 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Successfully created port: c25e6c8d-6940-4cef-9779-5f3cbc44baef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:22:54 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1257679170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:54 compute-0 podman[302334]: 2025-11-29 08:22:54.88895655 +0000 UTC m=+0.052593709 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.425 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Successfully updated port: c25e6c8d-6940-4cef-9779-5f3cbc44baef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.445 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.446 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquired lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.446 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.513 255071 DEBUG nova.compute.manager [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-changed-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.513 255071 DEBUG nova.compute.manager [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Refreshing instance network info cache due to event network-changed-c25e6c8d-6940-4cef-9779-5f3cbc44baef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.513 255071 DEBUG oslo_concurrency.lockutils [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:22:55 compute-0 ceph-mon[75237]: pgmap v2141: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.579 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:22:55 compute-0 nova_compute[255040]: 2025-11-29 08:22:55.605 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0054373870629029104 of space, bias 1.0, pg target 1.6312161188708731 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:22:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.499 255071 DEBUG nova.network.neutron [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updating instance_info_cache with network_info: [{"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.516 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Releasing lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.517 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Instance network_info: |[{"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.518 255071 DEBUG oslo_concurrency.lockutils [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.518 255071 DEBUG nova.network.neutron [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Refreshing network info cache for port c25e6c8d-6940-4cef-9779-5f3cbc44baef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.525 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Start _get_guest_xml network_info=[{"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1640093f-533d-43f4-ac27-350862646719', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e', 'attached_at': '', 'detached_at': '', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'serial': '1640093f-533d-43f4-ac27-350862646719'}, 'disk_bus': 'virtio', 'device_type': 'disk', 'boot_index': 0, 'delete_on_termination': False, 'attachment_id': '9cfb2d13-a7ef-42c1-b043-ca033b1f59d9', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.527 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404561.5210538, 8573a183-5b0d-4d79-ad1c-f531019fbe12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.528 255071 INFO nova.compute.manager [-] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] VM Stopped (Lifecycle Event)
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.535 255071 WARNING nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:22:56 compute-0 ceph-mon[75237]: pgmap v2142: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.541 255071 DEBUG nova.virt.libvirt.host [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.542 255071 DEBUG nova.virt.libvirt.host [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.551 255071 DEBUG nova.compute.manager [None req-436d458d-00b1-4073-9863-6bb0f07a6899 - - - - - -] [instance: 8573a183-5b0d-4d79-ad1c-f531019fbe12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.554 255071 DEBUG nova.virt.libvirt.host [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.555 255071 DEBUG nova.virt.libvirt.host [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
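The pair of host.py probes above determines where the libvirt driver can find a CPU controller: missing under cgroups v1, present under cgroups v2 on this host. A minimal, self-contained sketch of the v2 side of that check (this reads the standard kernel interface directly; it is not nova's own helper):

# Sketch: detect whether the unified (v2) cgroup hierarchy exposes the "cpu"
# controller, which is what the _has_cgroupsv2_cpu_controller probe reports.
from pathlib import Path

CGROUP_V2_CONTROLLERS = Path("/sys/fs/cgroup/cgroup.controllers")

def has_cgroupsv2_cpu_controller() -> bool:
    try:
        controllers = CGROUP_V2_CONTROLLERS.read_text().split()
    except OSError:
        # File absent: the host is not running the unified hierarchy.
        return False
    return "cpu" in controllers

if __name__ == "__main__":
    print("cgroup v2 cpu controller present:", has_cgroupsv2_cpu_controller())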
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.556 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.556 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:56:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c9fe27a-ed9a-4e02-a21e-16ae3c396f08',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.557 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.558 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.558 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.559 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.559 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.560 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.560 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.561 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.561 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.562 255071 DEBUG nova.virt.hardware [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
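The hardware.py lines above walk the topology selection for this m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0), the only split of 1 vCPU is sockets=1, cores=1, threads=1. An illustrative sketch of that enumeration step (not nova.virt.hardware itself; it simply lists every sockets*cores*threads factorization of the vCPU count under the per-dimension maxima from the log):

# Illustrative sketch: enumerate candidate CPU topologies for a vCPU count,
# bounded by per-dimension maxima (65536 each in the log above).
import itertools
from typing import List, NamedTuple

class Topology(NamedTuple):
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus: int, max_sockets: int = 65536,
                        max_cores: int = 65536,
                        max_threads: int = 65536) -> List[Topology]:
    candidates = itertools.product(
        range(1, min(vcpus, max_sockets) + 1),
        range(1, min(vcpus, max_cores) + 1),
        range(1, min(vcpus, max_threads) + 1))
    return [Topology(s, c, t) for s, c, t in candidates if s * c * t == vcpus]

print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)]
print(possible_topologies(4))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...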
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.597 255071 DEBUG nova.storage.rbd_utils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.602 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:56 compute-0 nova_compute[255040]: 2025-11-29 08:22:56.622 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:56 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:22:56 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307497479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.012 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
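Before attaching the RBD-backed volume, nova shells out to "ceph mon dump --format=json" (the 0.410s call above) to learn the monitor addresses for the connection info. A sketch of that lookup, assuming the standard JSON layout of ceph mon dump (a "mons" list whose entries carry a "public_addr" of the form IP:PORT/NONCE):

# Sketch: collect monitor host/port pairs the same way the logged command does.
import json
import subprocess

def get_mon_addrs(conf: str = "/etc/ceph/ceph.conf", user: str = "openstack"):
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
        check=True, capture_output=True, text=True).stdout
    hosts, ports = [], []
    for mon in json.loads(out).get("mons", []):
        addr = mon["public_addr"].split("/")[0]   # "IP:PORT/NONCE" -> "IP:PORT"
        host, _, port = addr.rpartition(":")
        hosts.append(host)
        ports.append(port)
    return hosts, ports

print(get_mon_addrs())   # e.g. (['192.168.122.100'], ['6789'])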
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.123 255071 DEBUG os_brick.encryptors [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Using volume encryption metadata '{'encryption_key_id': 'eadc5983-c235-472d-b304-df3088e0125e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1640093f-533d-43f4-ac27-350862646719', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e', 'attached_at': '', 'detached_at': '', 'volume_id': '1640093f-533d-43f4-ac27-350862646719', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.126 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.138 255071 DEBUG barbicanclient.v1.secrets [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/eadc5983-c235-472d-b304-df3088e0125e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.139 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.160 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.160 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.185 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.186 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.212 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.212 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.235 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.236 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.258 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.259 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.280 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.281 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.309 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.310 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.340 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.341 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.372 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.373 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.405 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.406 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.432 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.432 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.458 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.459 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.486 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.487 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.510 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.510 255071 INFO barbicanclient.base [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Calculated Secrets uuid ref: secrets/eadc5983-c235-472d-b304-df3088e0125e
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.535 255071 DEBUG barbicanclient.client [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
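The burst of barbicanclient GETs above is the key-manager round trip for the LUKS volume: the secret href for encryption_key_id eadc5983-c235-472d-b304-df3088e0125e is resolved and its payload (the passphrase material) is fetched. A sketch of the same retrieval using python-barbicanclient directly; the keystoneauth credentials and auth URL below are placeholders, and nova itself goes through its key-manager abstraction rather than calling the client like this:

# Sketch: fetch a secret payload from Barbican over a keystoneauth session.
from barbicanclient import client as barbican_client
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="https://keystone.example.com/v3",   # placeholder
                   username="nova", password="secret",           # placeholder
                   project_name="service",
                   user_domain_name="Default", project_domain_name="Default")
sess = session.Session(auth=auth)

barbican = barbican_client.Client(session=sess)
secret_ref = ("https://barbican-internal.openstack.svc:9311/secrets/"
              "eadc5983-c235-472d-b304-df3088e0125e")
secret = barbican.secrets.get(secret_ref)
passphrase = secret.payload   # lazily issues the GET for the key material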
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.536 255071 DEBUG nova.virt.libvirt.host [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <usage type="volume">
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <volume>1640093f-533d-43f4-ac27-350862646719</volume>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </usage>
Nov 29 08:22:57 compute-0 nova_compute[255040]: </secret>
Nov 29 08:22:57 compute-0 nova_compute[255040]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
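With the key material in hand, the driver registers it with libvirtd so QEMU can open the LUKS layer: the Secret XML above is defined and the secret's value is set. A minimal sketch of those two steps with the libvirt-python bindings (the connection URI and the placeholder passphrase bytes are assumptions):

# Sketch: define a libvirt secret from the XML above and attach its value.
import libvirt

SECRET_XML = """<secret ephemeral="no" private="no">
  <usage type="volume">
    <volume>1640093f-533d-43f4-ac27-350862646719</volume>
  </usage>
</secret>"""

passphrase = b"example-passphrase"   # placeholder; nova uses the key-manager payload

conn = libvirt.open("qemu:///system")
try:
    secret = conn.secretDefineXML(SECRET_XML)
    secret.setValue(passphrase)
    print("defined secret", secret.UUIDString())
finally:
    conn.close()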
Nov 29 08:22:57 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1307497479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.570 255071 DEBUG nova.virt.libvirt.vif [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-416106883',display_name='tempest-TransferEncryptedVolumeTest-server-416106883',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-416106883',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-wvfpwerz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:52Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.570 255071 DEBUG nova.network.os_vif_util [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.571 255071 DEBUG nova.network.os_vif_util [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.573 255071 DEBUG nova.objects.instance [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.590 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <uuid>e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e</uuid>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <name>instance-0000001e</name>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <memory>131072</memory>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <vcpu>1</vcpu>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <metadata>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-416106883</nova:name>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:creationTime>2025-11-29 08:22:56</nova:creationTime>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:flavor name="m1.nano">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:memory>128</nova:memory>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:disk>1</nova:disk>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:swap>0</nova:swap>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </nova:flavor>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:owner>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:user uuid="a08e1ef223b748efa4d5bdc804150f97">tempest-TransferEncryptedVolumeTest-1043863442-project-member</nova:user>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:project uuid="d25c6608beec4f818c6e402939192f16">tempest-TransferEncryptedVolumeTest-1043863442</nova:project>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </nova:owner>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <nova:ports>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <nova:port uuid="c25e6c8d-6940-4cef-9779-5f3cbc44baef">
Nov 29 08:22:57 compute-0 nova_compute[255040]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:         </nova:port>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </nova:ports>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </nova:instance>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </metadata>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <sysinfo type="smbios">
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <system>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="serial">e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="uuid">e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </system>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </sysinfo>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <os>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <boot dev="hd"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <smbios mode="sysinfo"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </os>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <features>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <acpi/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <apic/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <vmcoreinfo/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </features>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <clock offset="utc">
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <timer name="hpet" present="no"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </clock>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <cpu mode="host-model" match="exact">
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </cpu>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   <devices>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <disk type="network" device="cdrom">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <driver type="raw" cache="none"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <source protocol="rbd" name="vms/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </source>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <target dev="sda" bus="sata"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <disk type="network" device="disk">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <source protocol="rbd" name="volumes/volume-1640093f-533d-43f4-ac27-350862646719">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </source>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <auth username="openstack">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <secret type="ceph" uuid="321e9cb7-01a2-5759-bf8c-981c9a64aa3e"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </auth>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <target dev="vda" bus="virtio"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <serial>1640093f-533d-43f4-ac27-350862646719</serial>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <encryption format="luks">
Nov 29 08:22:57 compute-0 nova_compute[255040]:         <secret type="passphrase" uuid="d36db493-516e-4c0c-ac81-62d38b725cfc"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       </encryption>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </disk>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <interface type="ethernet">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <mac address="fa:16:3e:c0:65:fa"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <mtu size="1442"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <target dev="tapc25e6c8d-69"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </interface>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <serial type="pty">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <log file="/var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/console.log" append="off"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </serial>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <video>
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <model type="virtio"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </video>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <input type="tablet" bus="usb"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <rng model="virtio">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </rng>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <controller type="usb" index="0"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     <memballoon model="virtio">
Nov 29 08:22:57 compute-0 nova_compute[255040]:       <stats period="10"/>
Nov 29 08:22:57 compute-0 nova_compute[255040]:     </memballoon>
Nov 29 08:22:57 compute-0 nova_compute[255040]:   </devices>
Nov 29 08:22:57 compute-0 nova_compute[255040]: </domain>
Nov 29 08:22:57 compute-0 nova_compute[255040]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
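The domain document above is the end product of _get_guest_xml: a q35/KVM guest with the encrypted RBD volume as vda, the config-drive CD-ROM on SATA, and the OVS tap interface. A sketch of what happens with it next, reduced to the two libvirt calls that persist and boot the guest (error handling, event callbacks and the launch-flag plumbing are omitted):

# Sketch: define the rendered domain XML and power the guest on.
import libvirt

def define_and_start(domain_xml: str) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(domain_xml)   # persist the definition
        dom.create()                       # boot it (as "virsh start" would)
        print("started", dom.name(), dom.UUIDString())
    finally:
        conn.close()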
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.591 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Preparing to wait for external event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.592 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.592 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.592 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
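The three lockutils lines above are the compute manager registering a waiter for the upcoming network-vif-plugged event: a named in-process lock briefly guards the per-instance event registry. A minimal sketch of that pattern with oslo.concurrency (the dict below is a stand-in for the real InstanceEvents bookkeeping):

# Sketch: serialize access to a per-instance event registry with a named lock.
from oslo_concurrency import lockutils

instance_uuid = "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e"
pending_events = {}   # stand-in for the manager's internal registry

with lockutils.lock(f"{instance_uuid}-events"):
    pending_events.setdefault(
        "network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef", None)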
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.593 255071 DEBUG nova.virt.libvirt.vif [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-416106883',display_name='tempest-TransferEncryptedVolumeTest-server-416106883',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-416106883',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-wvfpwerz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:52Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.593 255071 DEBUG nova.network.os_vif_util [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.594 255071 DEBUG nova.network.os_vif_util [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.594 255071 DEBUG os_vif [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.595 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.595 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.596 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.598 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.598 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc25e6c8d-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.598 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc25e6c8d-69, col_values=(('external_ids', {'iface-id': 'c25e6c8d-6940-4cef-9779-5f3cbc44baef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:65:fa', 'vm-uuid': 'e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.600 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:57 compute-0 NetworkManager[49116]: <info>  [1764404577.6012] manager: (tapc25e6c8d-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.604 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.606 255071 INFO os_vif [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69')
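The two OVSDB transactions just before this point ensure br-int exists (a no-op here) and add the tap device with the external_ids that let OVN bind the port to this instance. The same effect, written out as ovs-vsctl calls from Python for readability (os-vif itself speaks OVSDB natively through ovsdbapp rather than invoking the CLI):

# Sketch: plug a VIF into br-int and tag it so OVN can claim the logical port.
import subprocess

def plug_ovs_port(bridge: str, dev: str, iface_id: str,
                  mac: str, vm_uuid: str) -> None:
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge,
                    "--", "set", "Bridge", bridge, "datapath_type=system"],
                   check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
                    "--", "set", "Interface", dev,
                    f"external_ids:iface-id={iface_id}",
                    "external_ids:iface-status=active",
                    f"external_ids:attached-mac={mac}",
                    f"external_ids:vm-uuid={vm_uuid}"],
                   check=True)

plug_ovs_port("br-int", "tapc25e6c8d-69",
              "c25e6c8d-6940-4cef-9779-5f3cbc44baef",
              "fa:16:3e:c0:65:fa",
              "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e")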
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.625 255071 DEBUG nova.network.neutron [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updated VIF entry in instance network info cache for port c25e6c8d-6940-4cef-9779-5f3cbc44baef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.625 255071 DEBUG nova.network.neutron [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updating instance_info_cache with network_info: [{"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.644 255071 DEBUG oslo_concurrency.lockutils [req-cb442702-3f5b-4acf-8f63-ccfd97905ee5 req-07dfe664-b97a-4bc9-9ad9-07d27dde4e42 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.661 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.662 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.662 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] No VIF found with MAC fa:16:3e:c0:65:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.662 255071 INFO nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Using config drive
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.684 255071 DEBUG nova.storage.rbd_utils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.935 255071 INFO nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Creating config drive at /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config
Nov 29 08:22:57 compute-0 nova_compute[255040]: 2025-11-29 08:22:57.947 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9kdpl0v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.076 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu9kdpl0v" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
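Because the instance requests a config drive, the driver stages the metadata into a temporary directory and packs it into an ISO9660 image with mkisofs, using the switches shown in the command above. A sketch of that packing step (the metadata directory path is a placeholder for nova's temporary directory):

# Sketch: build the config-drive ISO with the same mkisofs invocation.
import subprocess

def build_config_drive(output_path: str, metadata_dir: str) -> None:
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", output_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         metadata_dir],
        check=True)

build_config_drive(
    "/var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config",
    "/tmp/metadata")   # placeholder; nova uses a generated temporary directory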
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.099 255071 DEBUG nova.storage.rbd_utils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] rbd image e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.102 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:22:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.243 255071 DEBUG oslo_concurrency.processutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.244 255071 INFO nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Deleting local config drive /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e/disk.config because it was imported into RBD.
Nov 29 08:22:58 compute-0 kernel: tapc25e6c8d-69: entered promiscuous mode
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.2927] manager: (tapc25e6c8d-69): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.292 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 ovn_controller[153295]: 2025-11-29T08:22:58Z|00289|binding|INFO|Claiming lport c25e6c8d-6940-4cef-9779-5f3cbc44baef for this chassis.
Nov 29 08:22:58 compute-0 ovn_controller[153295]: 2025-11-29T08:22:58Z|00290|binding|INFO|c25e6c8d-6940-4cef-9779-5f3cbc44baef: Claiming fa:16:3e:c0:65:fa 10.100.0.6
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.298 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:65:fa 10.100.0.6'], port_security=['fa:16:3e:c0:65:fa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd377438-6ae0-49fd-8ec7-c089abbaa180', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=c25e6c8d-6940-4cef-9779-5f3cbc44baef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.299 163500 INFO neutron.agent.ovn.metadata.agent [-] Port c25e6c8d-6940-4cef-9779-5f3cbc44baef in datapath a234aa60-c8c5-4137-96cd-77f576498813 bound to our chassis
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.300 163500 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:58 compute-0 ovn_controller[153295]: 2025-11-29T08:22:58Z|00291|binding|INFO|Setting lport c25e6c8d-6940-4cef-9779-5f3cbc44baef ovn-installed in OVS
Nov 29 08:22:58 compute-0 ovn_controller[153295]: 2025-11-29T08:22:58Z|00292|binding|INFO|Setting lport c25e6c8d-6940-4cef-9779-5f3cbc44baef up in Southbound
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.315 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.319 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.318 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[023d9e28-2e32-4ec0-8e96-7b00e76d3170]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 systemd-udevd[302465]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.321 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa234aa60-c1 in ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.323 261880 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa234aa60-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.323 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[4af339a0-e2a5-44f6-8128-e4365fea2f3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.324 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1e2809-4414-4703-969b-151a15094308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 systemd-machined[216271]: New machine qemu-30-instance-0000001e.
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.336 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[74a9c6b3-a281-4d72-9dc0-adf751c9f9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.3389] device (tapc25e6c8d-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.3399] device (tapc25e6c8d-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.348 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bcc703-4720-4953-8192-1a9776499c72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.376 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5fceae-b512-4587-9e98-f3170428251f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 systemd-udevd[302469]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.3837] manager: (tapa234aa60-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.382 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7097d8f8-00e1-46f1-b4b1-5ee29a3b53b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.415 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[9e678dd5-aa16-4b97-a05b-40919278d1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.419 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[a038730f-9b06-45b6-b965-ae25f5a0871f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.4434] device (tapa234aa60-c0): carrier: link connected
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.448 261961 DEBUG oslo.privsep.daemon [-] privsep: reply[b56a929c-f611-459e-b389-0a590af96f10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.464 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[1bae6384-e565-4860-8196-d60176297be0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681578, 'reachable_time': 34831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302498, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.479 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[29eed4ee-cc94-4b28-9681-c6782b72c977]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9b6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681578, 'tstamp': 681578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302499, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.494 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5aa418-00f3-4317-a2d0-08d3f0f8c737]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa234aa60-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:9b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681578, 'reachable_time': 34831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302500, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.521 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[83da4a70-bb59-4cd6-81a0-2f32213a5e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ceph-mon[75237]: pgmap v2143: 305 pgs: 305 active+clean; 453 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.574 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7ac14d-1f38-436a-9083-52a425847f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.576 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.576 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.576 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa234aa60-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:58 compute-0 kernel: tapa234aa60-c0: entered promiscuous mode
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.578 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 NetworkManager[49116]: <info>  [1764404578.5790] manager: (tapa234aa60-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.583 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa234aa60-c0, col_values=(('external_ids', {'iface-id': '821a8872-735e-4a04-8244-d3a33097614d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.584 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 ovn_controller[153295]: 2025-11-29T08:22:58Z|00293|binding|INFO|Releasing lport 821a8872-735e-4a04-8244-d3a33097614d from this chassis (sb_readonly=0)
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.587 163500 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.595 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.595 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a8bf44-0a7c-4066-a810-da0aa5d3a321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.597 163500 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: global
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     log         /dev/log local0 debug
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     log-tag     haproxy-metadata-proxy-a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     user        root
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     group       root
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     maxconn     1024
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     pidfile     /var/lib/neutron/external/pids/a234aa60-c8c5-4137-96cd-77f576498813.pid.haproxy
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     daemon
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: defaults
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     log global
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     mode http
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     option httplog
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     option dontlognull
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     option http-server-close
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     option forwardfor
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     retries                 3
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     timeout http-request    30s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     timeout connect         30s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     timeout client          32s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     timeout server          32s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     timeout http-keep-alive 30s
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: listen listener
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     bind 169.254.169.254:80
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:     http-request add-header X-OVN-Network-ID a234aa60-c8c5-4137-96cd-77f576498813
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:22:58 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:22:58.597 163500 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'env', 'PROCESS_TAG=haproxy-a234aa60-c8c5-4137-96cd-77f576498813', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a234aa60-c8c5-4137-96cd-77f576498813.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.856 255071 DEBUG nova.compute.manager [req-b10f3f47-00a1-4133-8f6e-670924a2bc06 req-af15b7a5-4078-4c5b-9fa1-90785f9b8caf cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.856 255071 DEBUG oslo_concurrency.lockutils [req-b10f3f47-00a1-4133-8f6e-670924a2bc06 req-af15b7a5-4078-4c5b-9fa1-90785f9b8caf cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.857 255071 DEBUG oslo_concurrency.lockutils [req-b10f3f47-00a1-4133-8f6e-670924a2bc06 req-af15b7a5-4078-4c5b-9fa1-90785f9b8caf cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.857 255071 DEBUG oslo_concurrency.lockutils [req-b10f3f47-00a1-4133-8f6e-670924a2bc06 req-af15b7a5-4078-4c5b-9fa1-90785f9b8caf cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:22:58 compute-0 nova_compute[255040]: 2025-11-29 08:22:58.857 255071 DEBUG nova.compute.manager [req-b10f3f47-00a1-4133-8f6e-670924a2bc06 req-af15b7a5-4078-4c5b-9fa1-90785f9b8caf cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Processing event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:22:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:22:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2682124456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:22:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:22:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2682124456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:22:59 compute-0 podman[302568]: 2025-11-29 08:22:58.927556453 +0000 UTC m=+0.023217963 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:22:59 compute-0 podman[302568]: 2025-11-29 08:22:59.474865323 +0000 UTC m=+0.570526813 container create 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:22:59 compute-0 systemd[1]: Started libpod-conmon-2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518.scope.
Nov 29 08:22:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/509c9239d6f4f2f681f4acd600217b5b6be50adef9dc69ebae842ddd84bc6b96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:22:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2682124456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:22:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2682124456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:22:59 compute-0 podman[302568]: 2025-11-29 08:22:59.576105424 +0000 UTC m=+0.671766914 container init 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:22:59 compute-0 podman[302568]: 2025-11-29 08:22:59.581893329 +0000 UTC m=+0.677554819 container start 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:22:59 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [NOTICE]   (302587) : New worker (302589) forked
Nov 29 08:22:59 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [NOTICE]   (302587) : Loading success.
Nov 29 08:23:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 8 op/s
Nov 29 08:23:00 compute-0 ceph-mon[75237]: pgmap v2144: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 8 op/s
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.607 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:00 compute-0 sudo[302598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:00 compute-0 sudo[302598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:00 compute-0 sudo[302598]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:00 compute-0 sudo[302623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:23:00 compute-0 sudo[302623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:00 compute-0 sudo[302623]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:00 compute-0 sudo[302648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:00 compute-0 sudo[302648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:00 compute-0 sudo[302648]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:00 compute-0 sudo[302673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:23:00 compute-0 sudo[302673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.940 255071 DEBUG nova.compute.manager [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.941 255071 DEBUG oslo_concurrency.lockutils [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.941 255071 DEBUG oslo_concurrency.lockutils [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.942 255071 DEBUG oslo_concurrency.lockutils [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.942 255071 DEBUG nova.compute.manager [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] No waiting events found dispatching network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:23:00 compute-0 nova_compute[255040]: 2025-11-29 08:23:00.942 255071 WARNING nova.compute.manager [req-78783a48-f231-4106-b32f-7b442499c867 req-767f794f-c373-49ec-a1f1-6a45122afc5e cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received unexpected event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef for instance with vm_state building and task_state spawning.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.113 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404581.112532, e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.113 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] VM Started (Lifecycle Event)
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.115 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.118 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.121 255071 INFO nova.virt.libvirt.driver [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Instance spawned successfully.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.122 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.141 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.148 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.152 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.152 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.153 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.154 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.154 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.155 255071 DEBUG nova.virt.libvirt.driver [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.188 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.194 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404581.1127334, e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.194 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] VM Paused (Lifecycle Event)
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.220 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.223 255071 DEBUG nova.virt.driver [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] Emitting event <LifecycleEvent: 1764404581.1177466, e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.223 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] VM Resumed (Lifecycle Event)
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.235 255071 INFO nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Took 7.03 seconds to spawn the instance on the hypervisor.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.236 255071 DEBUG nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.258 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.261 255071 DEBUG nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.290 255071 INFO nova.compute.manager [None req-f833a2b9-2337-4a1f-9a94-c76d726bce39 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.303 255071 INFO nova.compute.manager [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Took 9.25 seconds to build instance.
Nov 29 08:23:01 compute-0 nova_compute[255040]: 2025-11-29 08:23:01.318 255071 DEBUG oslo_concurrency.lockutils [None req-64e31dcc-0e24-4361-a586-73fc93ae930f a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:01 compute-0 sudo[302673]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:01 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev e5614d90-6208-401a-9827-0bacf8642146 does not exist
Nov 29 08:23:01 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 5b823927-0d1d-49a1-b240-e0d26a22dc4a does not exist
Nov 29 08:23:01 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 883b3d44-ee68-4a5d-ac71-51d036eb5290 does not exist
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:23:01 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:23:01 compute-0 sudo[302733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:01 compute-0 sudo[302733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:01 compute-0 sudo[302733]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:23:01 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:23:01 compute-0 sudo[302759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:23:01 compute-0 sudo[302759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:01 compute-0 sudo[302759]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-0 podman[302757]: 2025-11-29 08:23:01.638719741 +0000 UTC m=+0.092134979 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:23:01 compute-0 sudo[302803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:01 compute-0 sudo[302803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:01 compute-0 sudo[302803]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:01 compute-0 sudo[302828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:23:01 compute-0 sudo[302828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.107814915 +0000 UTC m=+0.043848195 container create 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:23:02 compute-0 systemd[1]: Started libpod-conmon-396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733.scope.
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.088054126 +0000 UTC m=+0.024087426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.199264754 +0000 UTC m=+0.135298044 container init 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:23:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.208060431 +0000 UTC m=+0.144093711 container start 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.211365199 +0000 UTC m=+0.147398509 container attach 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 08:23:02 compute-0 busy_chaum[302911]: 167 167
Nov 29 08:23:02 compute-0 systemd[1]: libpod-396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733.scope: Deactivated successfully.
Nov 29 08:23:02 compute-0 conmon[302911]: conmon 396e6dae72c1b8828480 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733.scope/container/memory.events
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.216796074 +0000 UTC m=+0.152829354 container died 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4744eb389f2e8ece7297f348731f6e1cb87b920740061fc99b17b3dfeef9432b-merged.mount: Deactivated successfully.
Nov 29 08:23:02 compute-0 podman[302894]: 2025-11-29 08:23:02.258395109 +0000 UTC m=+0.194428389 container remove 396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:23:02 compute-0 systemd[1]: libpod-conmon-396e6dae72c1b88284802127746ec07f129945765f6ba0c490bdc3bee557f733.scope: Deactivated successfully.
Nov 29 08:23:02 compute-0 podman[302937]: 2025-11-29 08:23:02.433741195 +0000 UTC m=+0.046324921 container create 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:23:02 compute-0 systemd[1]: Started libpod-conmon-7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2.scope.
Nov 29 08:23:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:02 compute-0 podman[302937]: 2025-11-29 08:23:02.415164628 +0000 UTC m=+0.027748354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:02 compute-0 podman[302937]: 2025-11-29 08:23:02.542906509 +0000 UTC m=+0.155490235 container init 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:23:02 compute-0 podman[302937]: 2025-11-29 08:23:02.550515053 +0000 UTC m=+0.163098779 container start 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:23:02 compute-0 podman[302937]: 2025-11-29 08:23:02.553803291 +0000 UTC m=+0.166387047 container attach 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 08:23:02 compute-0 ceph-mon[75237]: pgmap v2145: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 08:23:02 compute-0 nova_compute[255040]: 2025-11-29 08:23:02.601 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:03 compute-0 agitated_ride[302952]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:23:03 compute-0 agitated_ride[302952]: --> relative data size: 1.0
Nov 29 08:23:03 compute-0 agitated_ride[302952]: --> All data devices are unavailable
Nov 29 08:23:03 compute-0 systemd[1]: libpod-7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2.scope: Deactivated successfully.
Nov 29 08:23:03 compute-0 podman[302937]: 2025-11-29 08:23:03.5733661 +0000 UTC m=+1.185949836 container died 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-309e44d8e82d8a542638857776a2fbc4c44fcd55739cee5def8f2cc3776a291c-merged.mount: Deactivated successfully.
Nov 29 08:23:03 compute-0 podman[302937]: 2025-11-29 08:23:03.6349804 +0000 UTC m=+1.247564126 container remove 7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ride, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:03 compute-0 systemd[1]: libpod-conmon-7a762aefca44c3a3754c79979d96e235f8ec0b861b659967dc5b16e755f3d5f2.scope: Deactivated successfully.
Nov 29 08:23:03 compute-0 sudo[302828]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:03 compute-0 sudo[302996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:03 compute-0 sudo[302996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:03 compute-0 sudo[302996]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:03 compute-0 sudo[303021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:23:03 compute-0 sudo[303021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:03 compute-0 sudo[303021]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:03 compute-0 sudo[303046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:03 compute-0 sudo[303046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:03 compute-0 sudo[303046]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:03 compute-0 sudo[303071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:23:03 compute-0 sudo[303071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.227619384 +0000 UTC m=+0.039678014 container create e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:23:04 compute-0 systemd[1]: Started libpod-conmon-e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225.scope.
Nov 29 08:23:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.210989108 +0000 UTC m=+0.023047768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.308937432 +0000 UTC m=+0.120996112 container init e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.316434142 +0000 UTC m=+0.128492792 container start e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.3197017 +0000 UTC m=+0.131760340 container attach e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:23:04 compute-0 intelligent_curie[303152]: 167 167
Nov 29 08:23:04 compute-0 systemd[1]: libpod-e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225.scope: Deactivated successfully.
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.322931077 +0000 UTC m=+0.134989747 container died e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 08:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39a7ff63c0c8a2b7302189db94abdf027f05d28380ec62ad74d5f64cdb8a270-merged.mount: Deactivated successfully.
Nov 29 08:23:04 compute-0 podman[303136]: 2025-11-29 08:23:04.365413784 +0000 UTC m=+0.177472414 container remove e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_curie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:04 compute-0 systemd[1]: libpod-conmon-e0279e6e9446596b605e54bc647a9c9350264ee25ed007c46fe0d26c27695225.scope: Deactivated successfully.
Nov 29 08:23:04 compute-0 podman[303176]: 2025-11-29 08:23:04.540084723 +0000 UTC m=+0.043281520 container create 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:23:04 compute-0 systemd[1]: Started libpod-conmon-97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3.scope.
Nov 29 08:23:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d5230ee0eb1baddbcb5562e2f0ae490045885c1bfb714bb0627b71cd4cc0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d5230ee0eb1baddbcb5562e2f0ae490045885c1bfb714bb0627b71cd4cc0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d5230ee0eb1baddbcb5562e2f0ae490045885c1bfb714bb0627b71cd4cc0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a80d5230ee0eb1baddbcb5562e2f0ae490045885c1bfb714bb0627b71cd4cc0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:04 compute-0 podman[303176]: 2025-11-29 08:23:04.520123498 +0000 UTC m=+0.023320305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:04 compute-0 podman[303176]: 2025-11-29 08:23:04.619989013 +0000 UTC m=+0.123185890 container init 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:23:04 compute-0 podman[303176]: 2025-11-29 08:23:04.62879664 +0000 UTC m=+0.131993437 container start 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:23:04 compute-0 podman[303176]: 2025-11-29 08:23:04.632271132 +0000 UTC m=+0.135467949 container attach 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:23:05 compute-0 ceph-mon[75237]: pgmap v2146: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 08:23:05 compute-0 confident_brattain[303192]: {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     "0": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "devices": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "/dev/loop3"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             ],
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_name": "ceph_lv0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_size": "21470642176",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "name": "ceph_lv0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "tags": {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_name": "ceph",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.crush_device_class": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.encrypted": "0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_id": "0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.vdo": "0"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             },
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "vg_name": "ceph_vg0"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         }
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     ],
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     "1": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "devices": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "/dev/loop4"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             ],
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_name": "ceph_lv1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_size": "21470642176",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "name": "ceph_lv1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "tags": {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_name": "ceph",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.crush_device_class": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.encrypted": "0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_id": "1",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.vdo": "0"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             },
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "vg_name": "ceph_vg1"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         }
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     ],
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     "2": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "devices": [
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "/dev/loop5"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             ],
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_name": "ceph_lv2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_size": "21470642176",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "name": "ceph_lv2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "tags": {
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.cluster_name": "ceph",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.crush_device_class": "",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.encrypted": "0",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osd_id": "2",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:                 "ceph.vdo": "0"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             },
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "type": "block",
Nov 29 08:23:05 compute-0 confident_brattain[303192]:             "vg_name": "ceph_vg2"
Nov 29 08:23:05 compute-0 confident_brattain[303192]:         }
Nov 29 08:23:05 compute-0 confident_brattain[303192]:     ]
Nov 29 08:23:05 compute-0 confident_brattain[303192]: }
Nov 29 08:23:05 compute-0 systemd[1]: libpod-97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3.scope: Deactivated successfully.
Nov 29 08:23:05 compute-0 conmon[303192]: conmon 97e030d811a2780f622a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3.scope/container/memory.events
Nov 29 08:23:05 compute-0 podman[303176]: 2025-11-29 08:23:05.494694072 +0000 UTC m=+0.997890909 container died 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a80d5230ee0eb1baddbcb5562e2f0ae490045885c1bfb714bb0627b71cd4cc0f-merged.mount: Deactivated successfully.
Nov 29 08:23:05 compute-0 podman[303176]: 2025-11-29 08:23:05.561522502 +0000 UTC m=+1.064719299 container remove 97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 08:23:05 compute-0 systemd[1]: libpod-conmon-97e030d811a2780f622a92cde5322fd7cf30685a0fc6fcfcc446a75a348304e3.scope: Deactivated successfully.
Nov 29 08:23:05 compute-0 sudo[303071]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:05 compute-0 nova_compute[255040]: 2025-11-29 08:23:05.649 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:05 compute-0 sudo[303212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:05 compute-0 sudo[303212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:05 compute-0 sudo[303212]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:05 compute-0 sudo[303237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:23:05 compute-0 sudo[303237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:05 compute-0 sudo[303237]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:05 compute-0 sudo[303262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:05 compute-0 sudo[303262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:05 compute-0 sudo[303262]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:05 compute-0 sudo[303287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:23:05 compute-0 sudo[303287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.182448463 +0000 UTC m=+0.045323554 container create e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:23:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 29 08:23:06 compute-0 systemd[1]: Started libpod-conmon-e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e.scope.
Nov 29 08:23:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.161674417 +0000 UTC m=+0.024549528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.270204174 +0000 UTC m=+0.133079285 container init e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.277736855 +0000 UTC m=+0.140611946 container start e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.281655371 +0000 UTC m=+0.144530532 container attach e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 08:23:06 compute-0 sharp_edison[303369]: 167 167
Nov 29 08:23:06 compute-0 systemd[1]: libpod-e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e.scope: Deactivated successfully.
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.284719673 +0000 UTC m=+0.147594784 container died e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 08:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4193972eb18765abe3d2485f3dce63dd3c931b578d96f1c39d78bbde4ce01f88-merged.mount: Deactivated successfully.
Nov 29 08:23:06 compute-0 podman[303353]: 2025-11-29 08:23:06.331668461 +0000 UTC m=+0.194543552 container remove e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:23:06 compute-0 systemd[1]: libpod-conmon-e2670cc2ae7d250fd7acc3ddb63829fc6ebbb5075d967f3ec4f0a228be7a012e.scope: Deactivated successfully.
Nov 29 08:23:06 compute-0 podman[303393]: 2025-11-29 08:23:06.526784756 +0000 UTC m=+0.050725789 container create 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:23:06 compute-0 systemd[1]: Started libpod-conmon-71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785.scope.
Nov 29 08:23:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1cd41019f8ca8a328b202d10d90999840c08ac8bcf7f25bce3c7f1ed8f4560/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1cd41019f8ca8a328b202d10d90999840c08ac8bcf7f25bce3c7f1ed8f4560/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1cd41019f8ca8a328b202d10d90999840c08ac8bcf7f25bce3c7f1ed8f4560/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e1cd41019f8ca8a328b202d10d90999840c08ac8bcf7f25bce3c7f1ed8f4560/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:23:06 compute-0 podman[303393]: 2025-11-29 08:23:06.502071574 +0000 UTC m=+0.026012697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:23:06 compute-0 podman[303393]: 2025-11-29 08:23:06.604345533 +0000 UTC m=+0.128286586 container init 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:23:06 compute-0 podman[303393]: 2025-11-29 08:23:06.610890949 +0000 UTC m=+0.134831992 container start 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 08:23:06 compute-0 podman[303393]: 2025-11-29 08:23:06.615942465 +0000 UTC m=+0.139883528 container attach 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.127 255071 DEBUG nova.compute.manager [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-changed-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.129 255071 DEBUG nova.compute.manager [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Refreshing instance network info cache due to event network-changed-c25e6c8d-6940-4cef-9779-5f3cbc44baef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.130 255071 DEBUG oslo_concurrency.lockutils [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.130 255071 DEBUG oslo_concurrency.lockutils [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquired lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.130 255071 DEBUG nova.network.neutron [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Refreshing network info cache for port c25e6c8d-6940-4cef-9779-5f3cbc44baef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:23:07 compute-0 ceph-mon[75237]: pgmap v2147: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 29 08:23:07 compute-0 optimistic_pike[303410]: {
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_id": 2,
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "type": "bluestore"
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     },
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_id": 0,
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "type": "bluestore"
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     },
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_id": 1,
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:         "type": "bluestore"
Nov 29 08:23:07 compute-0 optimistic_pike[303410]:     }
Nov 29 08:23:07 compute-0 optimistic_pike[303410]: }
Nov 29 08:23:07 compute-0 systemd[1]: libpod-71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785.scope: Deactivated successfully.
Nov 29 08:23:07 compute-0 podman[303393]: 2025-11-29 08:23:07.592281985 +0000 UTC m=+1.116223038 container died 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:23:07 compute-0 nova_compute[255040]: 2025-11-29 08:23:07.605 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e1cd41019f8ca8a328b202d10d90999840c08ac8bcf7f25bce3c7f1ed8f4560-merged.mount: Deactivated successfully.
Nov 29 08:23:08 compute-0 podman[303393]: 2025-11-29 08:23:08.056649584 +0000 UTC m=+1.580590617 container remove 71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:23:08 compute-0 sudo[303287]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:23:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:08 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:23:08 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 860f6825-78e4-4a36-88a9-032a9d08c65f does not exist
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 8531236a-0de0-443a-90db-dcc695c2f2fd does not exist
Nov 29 08:23:08 compute-0 systemd[1]: libpod-conmon-71671632e0781f3c9bebacbc96c3a167c624d580987aa67d50bbb2e9dbf32785.scope: Deactivated successfully.
Nov 29 08:23:08 compute-0 sudo[303456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:23:08 compute-0 sudo[303456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:08 compute-0 sudo[303456]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:23:08 compute-0 sudo[303481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:23:08 compute-0 sudo[303481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:23:08 compute-0 sudo[303481]: pam_unix(sudo:session): session closed for user root
Nov 29 08:23:08 compute-0 nova_compute[255040]: 2025-11-29 08:23:08.698 255071 DEBUG nova.network.neutron [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updated VIF entry in instance network info cache for port c25e6c8d-6940-4cef-9779-5f3cbc44baef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:23:08 compute-0 nova_compute[255040]: 2025-11-29 08:23:08.699 255071 DEBUG nova.network.neutron [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updating instance_info_cache with network_info: [{"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:08 compute-0 nova_compute[255040]: 2025-11-29 08:23:08.846 255071 DEBUG oslo_concurrency.lockutils [req-1cfb6edc-5fb9-415d-8a0a-4a049fa505f8 req-4c7dec62-a384-4ddf-bb1b-6f4ea07ce243 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Releasing lock "refresh_cache-e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:23:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:23:09 compute-0 ceph-mon[75237]: pgmap v2148: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:23:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:23:10 compute-0 nova_compute[255040]: 2025-11-29 08:23:10.652 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:11 compute-0 ceph-mon[75237]: pgmap v2149: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 08:23:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 65 op/s
Nov 29 08:23:12 compute-0 ceph-mon[75237]: pgmap v2150: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 65 op/s
Nov 29 08:23:12 compute-0 nova_compute[255040]: 2025-11-29 08:23:12.608 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:13 compute-0 ovn_controller[153295]: 2025-11-29T08:23:13Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Nov 29 08:23:13 compute-0 ovn_controller[153295]: 2025-11-29T08:23:13Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c0:65:fa 10.100.0.6
Nov 29 08:23:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 29 08:23:14 compute-0 podman[303506]: 2025-11-29 08:23:14.983141858 +0000 UTC m=+0.143370722 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 08:23:15 compute-0 ceph-mon[75237]: pgmap v2151: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 29 08:23:15 compute-0 nova_compute[255040]: 2025-11-29 08:23:15.690 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 852 B/s wr, 89 op/s
Nov 29 08:23:17 compute-0 ceph-mon[75237]: pgmap v2152: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 852 B/s wr, 89 op/s
Nov 29 08:23:17 compute-0 nova_compute[255040]: 2025-11-29 08:23:17.612 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:17 compute-0 ovn_controller[153295]: 2025-11-29T08:23:17Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Nov 29 08:23:17 compute-0 ovn_controller[153295]: 2025-11-29T08:23:17Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c0:65:fa 10.100.0.6
Nov 29 08:23:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 7.5 KiB/s wr, 66 op/s
Nov 29 08:23:18 compute-0 ovn_controller[153295]: 2025-11-29T08:23:18Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:65:fa 10.100.0.6
Nov 29 08:23:18 compute-0 ovn_controller[153295]: 2025-11-29T08:23:18Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:65:fa 10.100.0.6
Nov 29 08:23:19 compute-0 ceph-mon[75237]: pgmap v2153: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 7.5 KiB/s wr, 66 op/s
Nov 29 08:23:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 44 op/s
Nov 29 08:23:20 compute-0 nova_compute[255040]: 2025-11-29 08:23:20.692 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:21 compute-0 ceph-mon[75237]: pgmap v2154: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 11 KiB/s wr, 44 op/s
Nov 29 08:23:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 29 08:23:22 compute-0 nova_compute[255040]: 2025-11-29 08:23:22.614 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:23 compute-0 ceph-mon[75237]: pgmap v2155: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 29 08:23:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 29 08:23:25 compute-0 ceph-mon[75237]: pgmap v2156: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 29 08:23:25 compute-0 nova_compute[255040]: 2025-11-29 08:23:25.745 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:25 compute-0 podman[303532]: 2025-11-29 08:23:25.899120357 +0000 UTC m=+0.063775040 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 29 08:23:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 25 KiB/s wr, 44 op/s
Nov 29 08:23:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:27.148 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:27.149 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:27.149 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:27 compute-0 ceph-mon[75237]: pgmap v2157: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 25 KiB/s wr, 44 op/s
Nov 29 08:23:27 compute-0 nova_compute[255040]: 2025-11-29 08:23:27.616 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:27 compute-0 sshd-session[303551]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Nov 29 08:23:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 24 KiB/s wr, 18 op/s
Nov 29 08:23:29 compute-0 ceph-mon[75237]: pgmap v2158: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 24 KiB/s wr, 18 op/s
Nov 29 08:23:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 334 B/s rd, 19 KiB/s wr, 1 op/s
Nov 29 08:23:30 compute-0 nova_compute[255040]: 2025-11-29 08:23:30.747 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:31 compute-0 ceph-mon[75237]: pgmap v2159: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 334 B/s rd, 19 KiB/s wr, 1 op/s
Nov 29 08:23:31 compute-0 podman[303553]: 2025-11-29 08:23:31.909166715 +0000 UTC m=+0.069250756 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 08:23:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Nov 29 08:23:32 compute-0 nova_compute[255040]: 2025-11-29 08:23:32.618 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:33 compute-0 ceph-mon[75237]: pgmap v2160: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s wr, 1 op/s
Nov 29 08:23:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Nov 29 08:23:35 compute-0 ovn_controller[153295]: 2025-11-29T08:23:35Z|00294|memory_trim|INFO|Detected inactivity (last active 30022 ms ago): trimming memory
Nov 29 08:23:35 compute-0 ceph-mon[75237]: pgmap v2161: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Nov 29 08:23:35 compute-0 nova_compute[255040]: 2025-11-29 08:23:35.748 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 29 08:23:37 compute-0 sshd-session[303551]: Connection closed by authenticating user root 139.19.117.197 port 41756 [preauth]
Nov 29 08:23:37 compute-0 ceph-mon[75237]: pgmap v2162: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 29 08:23:37 compute-0 nova_compute[255040]: 2025-11-29 08:23:37.683 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.913 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.914 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.914 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.914 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.914 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.915 255071 INFO nova.compute.manager [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Terminating instance
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.916 255071 DEBUG nova.compute.manager [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:23:38
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'images', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Nov 29 08:23:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:23:38 compute-0 kernel: tapc25e6c8d-69 (unregistering): left promiscuous mode
Nov 29 08:23:38 compute-0 NetworkManager[49116]: <info>  [1764404618.9816] device (tapc25e6c8d-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:23:38 compute-0 ovn_controller[153295]: 2025-11-29T08:23:38Z|00295|binding|INFO|Releasing lport c25e6c8d-6940-4cef-9779-5f3cbc44baef from this chassis (sb_readonly=0)
Nov 29 08:23:38 compute-0 ovn_controller[153295]: 2025-11-29T08:23:38Z|00296|binding|INFO|Setting lport c25e6c8d-6940-4cef-9779-5f3cbc44baef down in Southbound
Nov 29 08:23:38 compute-0 ovn_controller[153295]: 2025-11-29T08:23:38Z|00297|binding|INFO|Removing iface tapc25e6c8d-69 ovn-installed in OVS
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.993 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:38 compute-0 nova_compute[255040]: 2025-11-29 08:23:38.995 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.000 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:65:fa 10.100.0.6'], port_security=['fa:16:3e:c0:65:fa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a234aa60-c8c5-4137-96cd-77f576498813', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd25c6608beec4f818c6e402939192f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd377438-6ae0-49fd-8ec7-c089abbaa180', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b002bcc-9ffd-4aaa-8483-7d6ef4853f0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>], logical_port=c25e6c8d-6940-4cef-9779-5f3cbc44baef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa998714af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.001 163500 INFO neutron.agent.ovn.metadata.agent [-] Port c25e6c8d-6940-4cef-9779-5f3cbc44baef in datapath a234aa60-c8c5-4137-96cd-77f576498813 unbound from our chassis
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.002 163500 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a234aa60-c8c5-4137-96cd-77f576498813, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.002 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[928a7aee-5e38-4060-a0e4-14f8f1713a7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.003 163500 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 namespace which is not needed anymore
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.011 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 08:23:39 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 17.159s CPU time.
Nov 29 08:23:39 compute-0 systemd-machined[216271]: Machine qemu-30-instance-0000001e terminated.
Nov 29 08:23:39 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [NOTICE]   (302587) : haproxy version is 2.8.14-c23fe91
Nov 29 08:23:39 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [NOTICE]   (302587) : path to executable is /usr/sbin/haproxy
Nov 29 08:23:39 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [WARNING]  (302587) : Exiting Master process...
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.137 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [ALERT]    (302587) : Current worker (302589) exited with code 143 (Terminated)
Nov 29 08:23:39 compute-0 neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813[302583]: [WARNING]  (302587) : All workers exited. Exiting... (0)
Nov 29 08:23:39 compute-0 systemd[1]: libpod-2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518.scope: Deactivated successfully.
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.142 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 podman[303599]: 2025-11-29 08:23:39.146650268 +0000 UTC m=+0.053907176 container died 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.154 255071 INFO nova.virt.libvirt.driver [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Instance destroyed successfully.
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.154 255071 DEBUG nova.objects.instance [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lazy-loading 'resources' on Instance uuid e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.169 255071 DEBUG nova.virt.libvirt.vif [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-416106883',display_name='tempest-TransferEncryptedVolumeTest-server-416106883',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-416106883',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE92PotWc36K+VLDIaJ8QpGP59cELlheEqm9nY+TFm8JbcBbR2J8kqRcsvjGW95/sxJ5sqaLllJygdYCELfyHlA83lAF017jxtDaIPnwvxv16NEk587eEM5n6ok24IEshQ==',key_name='tempest-TransferEncryptedVolumeTest-1268363580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d25c6608beec4f818c6e402939192f16',ramdisk_id='',reservation_id='r-wvfpwerz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1043863442',owner_user_name='tempest-TransferEncryptedVolumeTest-1043863442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:01Z,user_data=None,user_id='a08e1ef223b748efa4d5bdc804150f97',uuid=e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.170 255071 DEBUG nova.network.os_vif_util [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converting VIF {"id": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "address": "fa:16:3e:c0:65:fa", "network": {"id": "a234aa60-c8c5-4137-96cd-77f576498813", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-911341591-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d25c6608beec4f818c6e402939192f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc25e6c8d-69", "ovs_interfaceid": "c25e6c8d-6940-4cef-9779-5f3cbc44baef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.171 255071 DEBUG nova.network.os_vif_util [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.172 255071 DEBUG os_vif [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.173 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.174 255071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc25e6c8d-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.175 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.177 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.179 255071 INFO os_vif [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:65:fa,bridge_name='br-int',has_traffic_filtering=True,id=c25e6c8d-6940-4cef-9779-5f3cbc44baef,network=Network(a234aa60-c8c5-4137-96cd-77f576498813),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc25e6c8d-69')
Nov 29 08:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518-userdata-shm.mount: Deactivated successfully.
Nov 29 08:23:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-509c9239d6f4f2f681f4acd600217b5b6be50adef9dc69ebae842ddd84bc6b96-merged.mount: Deactivated successfully.
Nov 29 08:23:39 compute-0 podman[303599]: 2025-11-29 08:23:39.190147683 +0000 UTC m=+0.097404581 container cleanup 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:23:39 compute-0 systemd[1]: libpod-conmon-2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518.scope: Deactivated successfully.
Nov 29 08:23:39 compute-0 podman[303652]: 2025-11-29 08:23:39.266489197 +0000 UTC m=+0.046239860 container remove 2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.272 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[23d95860-52d8-49e3-bd08-378e9c20ee5a]: (4, ('Sat Nov 29 08:23:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518)\n2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518\nSat Nov 29 08:23:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 (2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518)\n2458f79376b71ef16747747eaf6e652b4ba462a38b6f05b6368557d52f678518\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.273 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[05cdc10a-e7ff-4353-99fc-43889c5ea6ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.274 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa234aa60-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.276 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 kernel: tapa234aa60-c0: left promiscuous mode
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.288 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.291 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[a0216883-2e04-4f63-a1f2-a78fb92dcf79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.305 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[c59dad6e-f426-4153-8486-dc1b2a5fdf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.307 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[89ee6fb8-1773-42c1-813c-cda949a5bf7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.323 261880 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb44784-ab68-42a8-9897-48fbe2ced71f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681571, 'reachable_time': 23847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303671, 'error': None, 'target': 'ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 systemd[1]: run-netns-ovnmeta\x2da234aa60\x2dc8c5\x2d4137\x2d96cd\x2d77f576498813.mount: Deactivated successfully.
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.327 163611 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a234aa60-c8c5-4137-96cd-77f576498813 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:23:39 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:39.327 163611 DEBUG oslo.privsep.daemon [-] privsep: reply[be012683-89c7-47b9-acd0-7f6a924265d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.368 255071 INFO nova.virt.libvirt.driver [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Deleting instance files /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_del
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.369 255071 INFO nova.virt.libvirt.driver [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Deletion of /var/lib/nova/instances/e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e_del complete
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.428 255071 INFO nova.compute.manager [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.428 255071 DEBUG oslo.service.loopingcall [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.429 255071 DEBUG nova.compute.manager [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.429 255071 DEBUG nova.network.neutron [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:23:39 compute-0 ceph-mon[75237]: pgmap v2163: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.900 255071 DEBUG nova.compute.manager [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-unplugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.902 255071 DEBUG oslo_concurrency.lockutils [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.902 255071 DEBUG oslo_concurrency.lockutils [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.902 255071 DEBUG oslo_concurrency.lockutils [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.902 255071 DEBUG nova.compute.manager [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] No waiting events found dispatching network-vif-unplugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:23:39 compute-0 nova_compute[255040]: 2025-11-29 08:23:39.903 255071 DEBUG nova.compute.manager [req-46e4ffb8-e6b4-42bd-9dd4-e306c0421e48 req-502d7d29-ee81-49c9-aadc-dc0b75b03dbd cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-unplugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.360 255071 DEBUG nova.network.neutron [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.377 255071 INFO nova.compute.manager [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Took 0.95 seconds to deallocate network for instance.
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.448 255071 DEBUG nova.compute.manager [req-50cc7c49-6e88-4fd4-9cc3-b29c5e4d23e7 req-37339ab5-ecd3-4adc-86f1-c23481fb4e4a cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-deleted-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 10 KiB/s wr, 4 op/s
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.578 255071 INFO nova.compute.manager [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.630 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.631 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.678 255071 DEBUG oslo_concurrency.processutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:40 compute-0 nova_compute[255040]: 2025-11-29 08:23:40.750 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3828471268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.115 255071 DEBUG oslo_concurrency.processutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.121 255071 DEBUG nova.compute.provider_tree [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.137 255071 DEBUG nova.scheduler.client.report [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.157 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.202 255071 INFO nova.scheduler.client.report [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Deleted allocations for instance e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.268 255071 DEBUG oslo_concurrency.lockutils [None req-d23c1a43-7581-4a88-90c4-8fe257d0f209 a08e1ef223b748efa4d5bdc804150f97 d25c6608beec4f818c6e402939192f16 - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:41 compute-0 ceph-mon[75237]: pgmap v2164: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 10 KiB/s wr, 4 op/s
Nov 29 08:23:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3828471268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:41 compute-0 nova_compute[255040]: 2025-11-29 08:23:41.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.042 255071 DEBUG nova.compute.manager [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.043 255071 DEBUG oslo_concurrency.lockutils [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Acquiring lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.043 255071 DEBUG oslo_concurrency.lockutils [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.043 255071 DEBUG oslo_concurrency.lockutils [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] Lock "e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.044 255071 DEBUG nova.compute.manager [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] No waiting events found dispatching network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:23:42 compute-0 nova_compute[255040]: 2025-11-29 08:23:42.044 255071 WARNING nova.compute.manager [req-9ad3d414-0e13-4f03-9b55-b91c58f3371b req-3b9a4a67-97f4-4e47-8333-626284439842 cb53babfc36f4e8d943884fc817d6078 d92bebadc64f48e5b0b6ef10e77b9a5d - - default default] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Received unexpected event network-vif-plugged-c25e6c8d-6940-4cef-9779-5f3cbc44baef for instance with vm_state deleted and task_state None.
Nov 29 08:23:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 8.6 KiB/s wr, 19 op/s
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:23:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:23:43 compute-0 nova_compute[255040]: 2025-11-29 08:23:43.504 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:43.505 163500 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'de:17:dc', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:8e:da:87:28:a0'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:23:43 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:43.507 163500 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:23:43 compute-0 ceph-mon[75237]: pgmap v2165: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 8.6 KiB/s wr, 19 op/s
Nov 29 08:23:44 compute-0 nova_compute[255040]: 2025-11-29 08:23:44.176 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Nov 29 08:23:44 compute-0 nova_compute[255040]: 2025-11-29 08:23:44.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:44 compute-0 nova_compute[255040]: 2025-11-29 08:23:44.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:44 compute-0 nova_compute[255040]: 2025-11-29 08:23:44.976 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:23:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:23:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263458102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:23:45 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263458102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:45 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:23:45.509 163500 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=230c4529-a404-4083-a72e-940c7905cc88, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:23:45 compute-0 ceph-mon[75237]: pgmap v2166: 305 pgs: 305 active+clean; 453 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 1.6 KiB/s wr, 18 op/s
Nov 29 08:23:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/263458102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:45 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/263458102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:45 compute-0 nova_compute[255040]: 2025-11-29 08:23:45.753 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:45 compute-0 podman[303695]: 2025-11-29 08:23:45.92411887 +0000 UTC m=+0.091741068 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:23:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 331 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.995 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.996 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.996 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.996 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:23:46 compute-0 nova_compute[255040]: 2025-11-29 08:23:46.996 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:47 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:47 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225662004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.416 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.581 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.582 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4289MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.582 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.582 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:23:47 compute-0 ceph-mon[75237]: pgmap v2167: 305 pgs: 305 active+clean; 331 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 29 08:23:47 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/225662004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.633 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.634 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.646 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing inventories for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.664 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating ProviderTree inventory for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.664 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Updating inventory in ProviderTree for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.680 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing aggregate associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.727 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Refreshing trait associations for resource provider 858d78b2-ffcd-4247-ba96-0ec767fec62e, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:23:47 compute-0 nova_compute[255040]: 2025-11-29 08:23:47.744 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:23:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:23:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746472508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:48 compute-0 nova_compute[255040]: 2025-11-29 08:23:48.144 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:23:48 compute-0 nova_compute[255040]: 2025-11-29 08:23:48.150 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:23:48 compute-0 nova_compute[255040]: 2025-11-29 08:23:48.165 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:23:48 compute-0 nova_compute[255040]: 2025-11-29 08:23:48.185 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:23:48 compute-0 nova_compute[255040]: 2025-11-29 08:23:48.186 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:23:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 331 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 852 B/s wr, 21 op/s
Nov 29 08:23:48 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1746472508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:23:49 compute-0 nova_compute[255040]: 2025-11-29 08:23:49.140 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:49 compute-0 nova_compute[255040]: 2025-11-29 08:23:49.178 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:49 compute-0 nova_compute[255040]: 2025-11-29 08:23:49.451 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:49 compute-0 ceph-mon[75237]: pgmap v2168: 305 pgs: 305 active+clean; 331 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 852 B/s wr, 21 op/s
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.188 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.189 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.189 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.189 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.202 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.203 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.204 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 29 08:23:50 compute-0 nova_compute[255040]: 2025-11-29 08:23:50.756 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:51 compute-0 ceph-mon[75237]: pgmap v2169: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 29 08:23:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 938 B/s wr, 33 op/s
Nov 29 08:23:53 compute-0 ceph-mon[75237]: pgmap v2170: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 938 B/s wr, 33 op/s
Nov 29 08:23:54 compute-0 nova_compute[255040]: 2025-11-29 08:23:54.153 255071 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404619.1518126, e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:23:54 compute-0 nova_compute[255040]: 2025-11-29 08:23:54.154 255071 INFO nova.compute.manager [-] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] VM Stopped (Lifecycle Event)
Nov 29 08:23:54 compute-0 nova_compute[255040]: 2025-11-29 08:23:54.177 255071 DEBUG nova.compute.manager [None req-2909c980-10c5-4742-b02c-9bbc6f6fb0a3 - - - - - -] [instance: e4ab8c0f-3bcd-49a1-bc2b-0a9c122e5d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:23:54 compute-0 nova_compute[255040]: 2025-11-29 08:23:54.181 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:23:55 compute-0 ceph-mon[75237]: pgmap v2171: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:23:55 compute-0 nova_compute[255040]: 2025-11-29 08:23:55.758 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:23:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:23:56 compute-0 podman[303768]: 2025-11-29 08:23:56.935503219 +0000 UTC m=+0.082062589 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:23:57 compute-0 ceph-mon[75237]: pgmap v2172: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 08:23:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 08:23:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:23:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1458168174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:23:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1458168174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:23:58 compute-0 nova_compute[255040]: 2025-11-29 08:23:58.984 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:23:59 compute-0 nova_compute[255040]: 2025-11-29 08:23:59.185 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:23:59 compute-0 ceph-mon[75237]: pgmap v2173: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 08:23:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1458168174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:23:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/1458168174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 08:24:00 compute-0 nova_compute[255040]: 2025-11-29 08:24:00.801 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.823199) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640823320, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1545, "num_deletes": 256, "total_data_size": 2433213, "memory_usage": 2475560, "flush_reason": "Manual Compaction"}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640841890, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2387708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40010, "largest_seqno": 41554, "table_properties": {"data_size": 2380487, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14783, "raw_average_key_size": 19, "raw_value_size": 2366123, "raw_average_value_size": 3150, "num_data_blocks": 189, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404481, "oldest_key_time": 1764404481, "file_creation_time": 1764404640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 18729 microseconds, and 8442 cpu microseconds.
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.841940) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2387708 bytes OK
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.841957) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.843648) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.843671) EVENT_LOG_v1 {"time_micros": 1764404640843656, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.843687) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2426438, prev total WAL file size 2426438, number of live WAL files 2.
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.844430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353131' seq:0, type:0; will stop at (end)
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2331KB)], [83(9763KB)]
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640844538, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12385656, "oldest_snapshot_seqno": -1}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7493 keys, 12237916 bytes, temperature: kUnknown
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640922614, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 12237916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12181658, "index_size": 36434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 190348, "raw_average_key_size": 25, "raw_value_size": 12041232, "raw_average_value_size": 1606, "num_data_blocks": 1452, "num_entries": 7493, "num_filter_entries": 7493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.923085) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 12237916 bytes
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.924605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.5 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 8017, records dropped: 524 output_compression: NoCompression
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.924637) EVENT_LOG_v1 {"time_micros": 1764404640924622, "job": 48, "event": "compaction_finished", "compaction_time_micros": 78218, "compaction_time_cpu_micros": 30073, "output_level": 6, "num_output_files": 1, "total_output_size": 12237916, "num_input_records": 8017, "num_output_records": 7493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640925653, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404640929698, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.844350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.929840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.929849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.929856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.929859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:00 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:00.929862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:01 compute-0 ceph-mon[75237]: pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 08:24:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:02 compute-0 podman[303788]: 2025-11-29 08:24:02.878419566 +0000 UTC m=+0.054101050 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:24:03 compute-0 ceph-mon[75237]: pgmap v2175: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:04 compute-0 nova_compute[255040]: 2025-11-29 08:24:04.246 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:05 compute-0 nova_compute[255040]: 2025-11-29 08:24:05.802 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:05 compute-0 ceph-mon[75237]: pgmap v2176: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:07 compute-0 ceph-mon[75237]: pgmap v2177: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:08 compute-0 sudo[303808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:08 compute-0 sudo[303808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:08 compute-0 sudo[303808]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:08 compute-0 sudo[303833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:08 compute-0 sudo[303833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:08 compute-0 sudo[303833]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:08 compute-0 sudo[303858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:08 compute-0 sudo[303858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:08 compute-0 sudo[303858]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:08 compute-0 sudo[303883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:24:08 compute-0 sudo[303883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:09 compute-0 podman[303980]: 2025-11-29 08:24:09.120060946 +0000 UTC m=+0.067887759 container exec 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:24:09 compute-0 podman[303980]: 2025-11-29 08:24:09.212536673 +0000 UTC m=+0.160363486 container exec_died 3d40b4863c00851766785d33ae2bb3113a9f48c8e2ecc86a634e3b391e4d33f2 (image=quay.io/ceph/ceph:v18, name=ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:24:09 compute-0 nova_compute[255040]: 2025-11-29 08:24:09.248 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:09 compute-0 sudo[303883]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:24:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:09 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:24:09 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:09 compute-0 ceph-mon[75237]: pgmap v2178: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:09 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:09 compute-0 sudo[304137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:09 compute-0 sudo[304137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:09 compute-0 sudo[304137]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:09 compute-0 sudo[304162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:09 compute-0 sudo[304162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:09 compute-0 sudo[304162]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 sudo[304187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:10 compute-0 sudo[304187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:10 compute-0 sudo[304187]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 sudo[304212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:24:10 compute-0 sudo[304212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:10 compute-0 sudo[304212]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:10 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ece395c5-d220-4650-9199-5f843e32291a does not exist
Nov 29 08:24:10 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 4291540b-b98c-435d-ade7-faacdd235ebc does not exist
Nov 29 08:24:10 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev a03a4831-9270-46de-9853-432ee340ef5d does not exist
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:24:10 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:10 compute-0 sudo[304267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:10 compute-0 sudo[304267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:10 compute-0 sudo[304267]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 sudo[304292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:10 compute-0 sudo[304292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:10 compute-0 sudo[304292]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 nova_compute[255040]: 2025-11-29 08:24:10.805 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:10 compute-0 sudo[304317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:10 compute-0 sudo[304317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:10 compute-0 sudo[304317]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:24:10 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:24:10 compute-0 sudo[304342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:24:10 compute-0 sudo[304342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.30767008 +0000 UTC m=+0.036362265 container create fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:11 compute-0 systemd[1]: Started libpod-conmon-fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b.scope.
Nov 29 08:24:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.293009538 +0000 UTC m=+0.021701733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.39278037 +0000 UTC m=+0.121472565 container init fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.400026084 +0000 UTC m=+0.128718259 container start fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.403651262 +0000 UTC m=+0.132343467 container attach fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:11 compute-0 sleepy_khayyam[304424]: 167 167
Nov 29 08:24:11 compute-0 systemd[1]: libpod-fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b.scope: Deactivated successfully.
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.406469887 +0000 UTC m=+0.135162062 container died fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8cb5abda57961714b9143c5e37ed824d719bb95c41b17b572dfebd891755b8f-merged.mount: Deactivated successfully.
Nov 29 08:24:11 compute-0 podman[304407]: 2025-11-29 08:24:11.44649897 +0000 UTC m=+0.175191145 container remove fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:24:11 compute-0 systemd[1]: libpod-conmon-fce32c76aa9ec37c2b99c1f2b2d92aa15f99e209180dbb1fd43616cd62158e0b.scope: Deactivated successfully.
Nov 29 08:24:11 compute-0 podman[304447]: 2025-11-29 08:24:11.600858114 +0000 UTC m=+0.041927434 container create af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:24:11 compute-0 systemd[1]: Started libpod-conmon-af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53.scope.
Nov 29 08:24:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:11 compute-0 podman[304447]: 2025-11-29 08:24:11.583355295 +0000 UTC m=+0.024424635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:11 compute-0 podman[304447]: 2025-11-29 08:24:11.681949786 +0000 UTC m=+0.123019116 container init af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:24:11 compute-0 podman[304447]: 2025-11-29 08:24:11.688500701 +0000 UTC m=+0.129570031 container start af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:24:11 compute-0 podman[304447]: 2025-11-29 08:24:11.691695747 +0000 UTC m=+0.132765067 container attach af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:24:11 compute-0 ceph-mon[75237]: pgmap v2179: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:12 compute-0 musing_jennings[304463]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:24:12 compute-0 musing_jennings[304463]: --> relative data size: 1.0
Nov 29 08:24:12 compute-0 musing_jennings[304463]: --> All data devices are unavailable
Nov 29 08:24:12 compute-0 systemd[1]: libpod-af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53.scope: Deactivated successfully.
Nov 29 08:24:12 compute-0 podman[304492]: 2025-11-29 08:24:12.765357844 +0000 UTC m=+0.030545698 container died af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0855d89fab8ea004518f221a6326a6f86d0fad7a773707322d961852441fc6-merged.mount: Deactivated successfully.
Nov 29 08:24:12 compute-0 podman[304492]: 2025-11-29 08:24:12.812755734 +0000 UTC m=+0.077943538 container remove af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jennings, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:24:12 compute-0 systemd[1]: libpod-conmon-af21651173037daa08d54ea5794996155a1a3dae195274eb08cb43edb2e6fa53.scope: Deactivated successfully.
Nov 29 08:24:12 compute-0 sudo[304342]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:12 compute-0 sudo[304507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:12 compute-0 sudo[304507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:12 compute-0 sudo[304507]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:13 compute-0 sudo[304532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:13 compute-0 sudo[304532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:13 compute-0 sudo[304532]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:13 compute-0 sudo[304557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:13 compute-0 sudo[304557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:13 compute-0 sudo[304557]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:13 compute-0 sudo[304582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:24:13 compute-0 sudo[304582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.488663358 +0000 UTC m=+0.038438540 container create 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:24:13 compute-0 systemd[1]: Started libpod-conmon-7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c.scope.
Nov 29 08:24:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.560652656 +0000 UTC m=+0.110427828 container init 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.566795921 +0000 UTC m=+0.116571093 container start 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.47194831 +0000 UTC m=+0.021723502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.569866473 +0000 UTC m=+0.119641675 container attach 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:24:13 compute-0 objective_shtern[304662]: 167 167
Nov 29 08:24:13 compute-0 systemd[1]: libpod-7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c.scope: Deactivated successfully.
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.573776428 +0000 UTC m=+0.123551670 container died 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-16204b2dcc609c5e5f7d4227a4910533bd137fd1901139be3476dea11517dd67-merged.mount: Deactivated successfully.
Nov 29 08:24:13 compute-0 podman[304646]: 2025-11-29 08:24:13.612053043 +0000 UTC m=+0.161828215 container remove 7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:24:13 compute-0 systemd[1]: libpod-conmon-7ef5d1a6abc7ed6369c34b78ca8a7cda162826d701329afc6d5f04329f57843c.scope: Deactivated successfully.
Nov 29 08:24:13 compute-0 podman[304687]: 2025-11-29 08:24:13.761982719 +0000 UTC m=+0.038936014 container create c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:24:13 compute-0 systemd[1]: Started libpod-conmon-c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545.scope.
Nov 29 08:24:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998b6c71d4d0d13306d8ace104afc55bfb65e78a26a83e8d23afef13f0b41bf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998b6c71d4d0d13306d8ace104afc55bfb65e78a26a83e8d23afef13f0b41bf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998b6c71d4d0d13306d8ace104afc55bfb65e78a26a83e8d23afef13f0b41bf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998b6c71d4d0d13306d8ace104afc55bfb65e78a26a83e8d23afef13f0b41bf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:13 compute-0 podman[304687]: 2025-11-29 08:24:13.744548372 +0000 UTC m=+0.021501697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:13 compute-0 podman[304687]: 2025-11-29 08:24:13.850830589 +0000 UTC m=+0.127783954 container init c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:13 compute-0 podman[304687]: 2025-11-29 08:24:13.85906663 +0000 UTC m=+0.136019925 container start c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 08:24:13 compute-0 podman[304687]: 2025-11-29 08:24:13.861698689 +0000 UTC m=+0.138652024 container attach c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:24:13 compute-0 ceph-mon[75237]: pgmap v2180: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:14 compute-0 nova_compute[255040]: 2025-11-29 08:24:14.251 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]: {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     "0": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "devices": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "/dev/loop3"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             ],
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_name": "ceph_lv0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_size": "21470642176",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "name": "ceph_lv0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "tags": {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_name": "ceph",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.crush_device_class": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.encrypted": "0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_id": "0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.vdo": "0"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             },
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "vg_name": "ceph_vg0"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         }
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     ],
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     "1": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "devices": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "/dev/loop4"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             ],
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_name": "ceph_lv1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_size": "21470642176",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "name": "ceph_lv1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "tags": {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_name": "ceph",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.crush_device_class": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.encrypted": "0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_id": "1",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.vdo": "0"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             },
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "vg_name": "ceph_vg1"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         }
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     ],
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     "2": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "devices": [
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "/dev/loop5"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             ],
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_name": "ceph_lv2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_size": "21470642176",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "name": "ceph_lv2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "tags": {
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.cluster_name": "ceph",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.crush_device_class": "",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.encrypted": "0",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osd_id": "2",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:                 "ceph.vdo": "0"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             },
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "type": "block",
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:             "vg_name": "ceph_vg2"
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:         }
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]:     ]
Nov 29 08:24:14 compute-0 flamboyant_hodgkin[304704]: }
Nov 29 08:24:14 compute-0 systemd[1]: libpod-c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545.scope: Deactivated successfully.
Nov 29 08:24:14 compute-0 podman[304687]: 2025-11-29 08:24:14.638794955 +0000 UTC m=+0.915748270 container died c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 29 08:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-998b6c71d4d0d13306d8ace104afc55bfb65e78a26a83e8d23afef13f0b41bf2-merged.mount: Deactivated successfully.
Nov 29 08:24:14 compute-0 podman[304687]: 2025-11-29 08:24:14.706951379 +0000 UTC m=+0.983904674 container remove c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:24:14 compute-0 systemd[1]: libpod-conmon-c2da23178bff0fdba4f89cf2cd861a2110bd31c1d7c5cb75957a6595b8179545.scope: Deactivated successfully.
Nov 29 08:24:14 compute-0 sudo[304582]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:14 compute-0 sudo[304727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:14 compute-0 sudo[304727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:14 compute-0 sudo[304727]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:14 compute-0 sudo[304752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:24:14 compute-0 sudo[304752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:14 compute-0 sudo[304752]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:14 compute-0 sudo[304777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:14 compute-0 sudo[304777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:14 compute-0 sudo[304777]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:14 compute-0 sudo[304802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:24:14 compute-0 sudo[304802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.330007868 +0000 UTC m=+0.035427670 container create 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:24:15 compute-0 systemd[1]: Started libpod-conmon-58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648.scope.
Nov 29 08:24:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.406684962 +0000 UTC m=+0.112104874 container init 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.315227123 +0000 UTC m=+0.020646945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.415449496 +0000 UTC m=+0.120869298 container start 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.419429683 +0000 UTC m=+0.124849505 container attach 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:15 compute-0 determined_kalam[304882]: 167 167
Nov 29 08:24:15 compute-0 systemd[1]: libpod-58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648.scope: Deactivated successfully.
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.421974012 +0000 UTC m=+0.127393814 container died 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9ce8f85745be77364f6ada81fec7a6aa5296cd4319e494abec15d62df10bcfd-merged.mount: Deactivated successfully.
Nov 29 08:24:15 compute-0 podman[304865]: 2025-11-29 08:24:15.462648331 +0000 UTC m=+0.168068133 container remove 58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 08:24:15 compute-0 systemd[1]: libpod-conmon-58ad0fdeffda83aeb29563045ea33c68b03ba348a521de20bc81aba6e936b648.scope: Deactivated successfully.
Nov 29 08:24:15 compute-0 podman[304906]: 2025-11-29 08:24:15.650593415 +0000 UTC m=+0.062428223 container create c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:24:15 compute-0 systemd[1]: Started libpod-conmon-c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510.scope.
Nov 29 08:24:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f0b6d1a577d2f373726d739a0e11dc41555ca5587280de6633c4c9df6342e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f0b6d1a577d2f373726d739a0e11dc41555ca5587280de6633c4c9df6342e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f0b6d1a577d2f373726d739a0e11dc41555ca5587280de6633c4c9df6342e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9f0b6d1a577d2f373726d739a0e11dc41555ca5587280de6633c4c9df6342e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:24:15 compute-0 podman[304906]: 2025-11-29 08:24:15.63470646 +0000 UTC m=+0.046541288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:24:15 compute-0 podman[304906]: 2025-11-29 08:24:15.738406307 +0000 UTC m=+0.150241125 container init c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:24:15 compute-0 podman[304906]: 2025-11-29 08:24:15.747809689 +0000 UTC m=+0.159644507 container start c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:24:15 compute-0 podman[304906]: 2025-11-29 08:24:15.753224333 +0000 UTC m=+0.165059161 container attach c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:24:15 compute-0 nova_compute[255040]: 2025-11-29 08:24:15.806 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:15 compute-0 ceph-mon[75237]: pgmap v2181: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:16 compute-0 naughty_carson[304922]: {
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_id": 2,
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "type": "bluestore"
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     },
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_id": 0,
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "type": "bluestore"
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     },
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_id": 1,
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:24:16 compute-0 naughty_carson[304922]:         "type": "bluestore"
Nov 29 08:24:16 compute-0 naughty_carson[304922]:     }
Nov 29 08:24:16 compute-0 naughty_carson[304922]: }
Nov 29 08:24:16 compute-0 systemd[1]: libpod-c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510.scope: Deactivated successfully.
Nov 29 08:24:16 compute-0 podman[304906]: 2025-11-29 08:24:16.802790396 +0000 UTC m=+1.214625224 container died c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:24:16 compute-0 systemd[1]: libpod-c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510.scope: Consumed 1.062s CPU time.
Nov 29 08:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9f0b6d1a577d2f373726d739a0e11dc41555ca5587280de6633c4c9df6342e4-merged.mount: Deactivated successfully.
Nov 29 08:24:16 compute-0 podman[304906]: 2025-11-29 08:24:16.860194534 +0000 UTC m=+1.272029342 container remove c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carson, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:24:16 compute-0 systemd[1]: libpod-conmon-c5b282dde7e92d535c01a00354c9d76e8f563ae6180bd0fad0702b3e320fc510.scope: Deactivated successfully.
Nov 29 08:24:16 compute-0 sudo[304802]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:24:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:16 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:24:16 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:16 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev f2558a5a-b275-44a1-b17a-f1e014acaabb does not exist
Nov 29 08:24:16 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev c21e7b2c-d460-4088-9883-f03955431bcf does not exist
Nov 29 08:24:16 compute-0 podman[304956]: 2025-11-29 08:24:16.946341961 +0000 UTC m=+0.107924921 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:24:16 compute-0 sudo[304984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:24:16 compute-0 sudo[304984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:16 compute-0 sudo[304984]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:17 compute-0 sudo[305014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:24:17 compute-0 sudo[305014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:24:17 compute-0 sudo[305014]: pam_unix(sudo:session): session closed for user root
Nov 29 08:24:17 compute-0 ceph-mon[75237]: pgmap v2182: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:17 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:24:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:19 compute-0 nova_compute[255040]: 2025-11-29 08:24:19.256 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:19 compute-0 ceph-mon[75237]: pgmap v2183: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:20 compute-0 nova_compute[255040]: 2025-11-29 08:24:20.829 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:21 compute-0 ceph-mon[75237]: pgmap v2184: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:24 compute-0 ceph-mon[75237]: pgmap v2185: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:24 compute-0 nova_compute[255040]: 2025-11-29 08:24:24.261 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:25 compute-0 nova_compute[255040]: 2025-11-29 08:24:25.830 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:26 compute-0 ceph-mon[75237]: pgmap v2186: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:24:27.149 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:24:27.150 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:24:27.150 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:27 compute-0 ovn_controller[153295]: 2025-11-29T08:24:27Z|00298|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 29 08:24:27 compute-0 podman[305039]: 2025-11-29 08:24:27.877967151 +0000 UTC m=+0.051545232 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:24:28 compute-0 ceph-mon[75237]: pgmap v2187: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:29 compute-0 nova_compute[255040]: 2025-11-29 08:24:29.264 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:30 compute-0 ceph-mon[75237]: pgmap v2188: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:30 compute-0 nova_compute[255040]: 2025-11-29 08:24:30.873 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:32 compute-0 ceph-mon[75237]: pgmap v2189: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:33 compute-0 podman[305058]: 2025-11-29 08:24:33.927021733 +0000 UTC m=+0.081212156 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:24:34 compute-0 ceph-mon[75237]: pgmap v2190: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:34 compute-0 nova_compute[255040]: 2025-11-29 08:24:34.267 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:35 compute-0 nova_compute[255040]: 2025-11-29 08:24:35.875 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:36 compute-0 ceph-mon[75237]: pgmap v2191: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.220760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677220798, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 559, "num_deletes": 251, "total_data_size": 613141, "memory_usage": 623576, "flush_reason": "Manual Compaction"}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677227620, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 596832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41555, "largest_seqno": 42113, "table_properties": {"data_size": 593718, "index_size": 1086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7204, "raw_average_key_size": 19, "raw_value_size": 587517, "raw_average_value_size": 1566, "num_data_blocks": 48, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404641, "oldest_key_time": 1764404641, "file_creation_time": 1764404677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 6959 microseconds, and 2512 cpu microseconds.
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.227724) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 596832 bytes OK
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.227741) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.233989) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.234021) EVENT_LOG_v1 {"time_micros": 1764404677234012, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.234077) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 610009, prev total WAL file size 610009, number of live WAL files 2.
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.234726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(582KB)], [86(11MB)]
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677234783, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12834748, "oldest_snapshot_seqno": -1}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7355 keys, 11052583 bytes, temperature: kUnknown
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677304133, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11052583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10998801, "index_size": 34278, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 188149, "raw_average_key_size": 25, "raw_value_size": 10862351, "raw_average_value_size": 1476, "num_data_blocks": 1352, "num_entries": 7355, "num_filter_entries": 7355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401203, "oldest_key_time": 0, "file_creation_time": 1764404677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3e5adb8f-86c2-4b9b-abb8-241c7a80419a", "db_session_id": "UZSC7H07F1USVRLC4WEP", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.304358) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11052583 bytes
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.305596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 159.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(40.0) write-amplify(18.5) OK, records in: 7868, records dropped: 513 output_compression: NoCompression
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.305617) EVENT_LOG_v1 {"time_micros": 1764404677305607, "job": 50, "event": "compaction_finished", "compaction_time_micros": 69420, "compaction_time_cpu_micros": 27250, "output_level": 6, "num_output_files": 1, "total_output_size": 11052583, "num_input_records": 7868, "num_output_records": 7355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677305860, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404677308306, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.234606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.308416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.308423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.308425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.308427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:37 compute-0 ceph-mon[75237]: rocksdb: (Original Log Time 2025/11/29-08:24:37.308429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:24:38 compute-0 ceph-mon[75237]: pgmap v2192: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:24:38
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root']
Nov 29 08:24:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:24:39 compute-0 nova_compute[255040]: 2025-11-29 08:24:39.271 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:40 compute-0 ceph-mon[75237]: pgmap v2193: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:40 compute-0 nova_compute[255040]: 2025-11-29 08:24:40.908 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:42 compute-0 ceph-mon[75237]: pgmap v2194: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:42 compute-0 nova_compute[255040]: 2025-11-29 08:24:42.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:24:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:24:43 compute-0 nova_compute[255040]: 2025-11-29 08:24:43.976 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:44 compute-0 ceph-mon[75237]: pgmap v2195: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:44 compute-0 nova_compute[255040]: 2025-11-29 08:24:44.315 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:44 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:44 compute-0 nova_compute[255040]: 2025-11-29 08:24:44.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:45 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:45 compute-0 nova_compute[255040]: 2025-11-29 08:24:45.911 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:46 compute-0 ceph-mon[75237]: pgmap v2196: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:46 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:46 compute-0 nova_compute[255040]: 2025-11-29 08:24:46.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:46 compute-0 nova_compute[255040]: 2025-11-29 08:24:46.975 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:24:47 compute-0 podman[305079]: 2025-11-29 08:24:47.907817035 +0000 UTC m=+0.081853745 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.974 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.998 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.999 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:24:47 compute-0 nova_compute[255040]: 2025-11-29 08:24:47.999 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:48 compute-0 ceph-mon[75237]: pgmap v2197: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:48 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:24:48 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719894226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.428 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:48 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.594 255071 WARNING nova.virt.libvirt.driver [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.595 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.595 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.596 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.646 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.646 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:24:48 compute-0 nova_compute[255040]: 2025-11-29 08:24:48.664 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:24:49 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:24:49 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353235230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.076 255071 DEBUG oslo_concurrency.processutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.084 255071 DEBUG nova.compute.provider_tree [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed in ProviderTree for provider: 858d78b2-ffcd-4247-ba96-0ec767fec62e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.107 255071 DEBUG nova.scheduler.client.report [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Inventory has not changed for provider 858d78b2-ffcd-4247-ba96-0ec767fec62e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.109 255071 DEBUG nova.compute.resource_tracker [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.109 255071 DEBUG oslo_concurrency.lockutils [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:24:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/719894226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:49 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3353235230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:24:49 compute-0 nova_compute[255040]: 2025-11-29 08:24:49.318 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.110 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.111 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.111 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.128 255071 DEBUG nova.compute.manager [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.129 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:50 compute-0 ceph-mon[75237]: pgmap v2198: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:50 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:50 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.913 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:50 compute-0 nova_compute[255040]: 2025-11-29 08:24:50.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:51 compute-0 nova_compute[255040]: 2025-11-29 08:24:51.969 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:24:52 compute-0 ceph-mon[75237]: pgmap v2199: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:52 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:53 compute-0 ceph-mon[75237]: pgmap v2200: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:54 compute-0 nova_compute[255040]: 2025-11-29 08:24:54.322 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:54 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:55 compute-0 ceph-mon[75237]: pgmap v2201: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:55 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:24:55 compute-0 nova_compute[255040]: 2025-11-29 08:24:55.914 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:24:56 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:57 compute-0 ceph-mon[75237]: pgmap v2202: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:58 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:58 compute-0 podman[305151]: 2025-11-29 08:24:58.933554628 +0000 UTC m=+0.100819687 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:24:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:24:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2227262370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:58 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:24:58 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2227262370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:24:59 compute-0 nova_compute[255040]: 2025-11-29 08:24:59.326 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:24:59 compute-0 ceph-mon[75237]: pgmap v2203: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:24:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2227262370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:24:59 compute-0 ceph-mon[75237]: from='client.? 192.168.122.10:0/2227262370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:25:00 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:00 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:00 compute-0 nova_compute[255040]: 2025-11-29 08:25:00.955 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:01 compute-0 ceph-mon[75237]: pgmap v2204: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:02 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:03 compute-0 ceph-mon[75237]: pgmap v2205: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:04 compute-0 nova_compute[255040]: 2025-11-29 08:25:04.332 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:04 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:04 compute-0 podman[305170]: 2025-11-29 08:25:04.883896099 +0000 UTC m=+0.052696224 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:25:05 compute-0 ceph-mon[75237]: pgmap v2206: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:05 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:05 compute-0 nova_compute[255040]: 2025-11-29 08:25:05.957 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:06 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:06 compute-0 sshd-session[305149]: Received disconnect from 45.78.219.195 port 35338:11: Bye Bye [preauth]
Nov 29 08:25:06 compute-0 sshd-session[305149]: Disconnected from authenticating user root 45.78.219.195 port 35338 [preauth]
Nov 29 08:25:07 compute-0 ceph-mon[75237]: pgmap v2207: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:08 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:08 compute-0 sshd-session[305190]: Accepted publickey for zuul from 192.168.122.10 port 45464 ssh2: ECDSA SHA256:zzPx6lues+u/Uo6Vz/mUT3GOEVfGsUrsby+q6+T28GI
Nov 29 08:25:08 compute-0 systemd-logind[782]: New session 52 of user zuul.
Nov 29 08:25:08 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 29 08:25:08 compute-0 sshd-session[305190]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 08:25:09 compute-0 sudo[305194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 08:25:09 compute-0 sudo[305194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 08:25:09 compute-0 nova_compute[255040]: 2025-11-29 08:25:09.337 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:09 compute-0 ceph-mon[75237]: pgmap v2208: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:10 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:10 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:10 compute-0 nova_compute[255040]: 2025-11-29 08:25:10.988 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:11 compute-0 ceph-mon[75237]: pgmap v2209: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:12 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19209 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:12 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:12 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19211 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:12 compute-0 ceph-mon[75237]: from='client.19209 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:13 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 08:25:13 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304916498' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 08:25:13 compute-0 ceph-mon[75237]: pgmap v2210: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:13 compute-0 ceph-mon[75237]: from='client.19211 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:13 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2304916498' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 08:25:14 compute-0 nova_compute[255040]: 2025-11-29 08:25:14.340 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:14 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:15 compute-0 ceph-mon[75237]: pgmap v2211: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:15 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:15 compute-0 nova_compute[255040]: 2025-11-29 08:25:15.991 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:16 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:17 compute-0 sudo[305496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:17 compute-0 sudo[305496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:17 compute-0 sudo[305496]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:17 compute-0 sudo[305521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:25:17 compute-0 sudo[305521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:17 compute-0 sudo[305521]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:17 compute-0 sudo[305546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:17 compute-0 sudo[305546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:17 compute-0 sudo[305546]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:17 compute-0 sudo[305574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:25:17 compute-0 sudo[305574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:17 compute-0 ceph-mon[75237]: pgmap v2212: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:17 compute-0 sudo[305574]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:17 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 394a8d57-2e50-4a0e-a14b-b52581d7b79c does not exist
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:25:17 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev dd91b25d-2e43-4e2f-be5e-486b38a4b3f5 does not exist
Nov 29 08:25:17 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 8b6cc381-9944-4b44-89d1-da6b62df6d04 does not exist
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:25:17 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:25:17 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:17 compute-0 sudo[305630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:17 compute-0 sudo[305630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:17 compute-0 sudo[305630]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:18 compute-0 sudo[305661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:25:18 compute-0 sudo[305661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:18 compute-0 sudo[305661]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:18 compute-0 podman[305654]: 2025-11-29 08:25:18.128336995 +0000 UTC m=+0.146165873 container health_status 9ffb5fe695d238f155124aea7f37d6dd43f98b0d06e845ac9487e82ea1e860cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 08:25:18 compute-0 sudo[305704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:18 compute-0 sudo[305704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:18 compute-0 sudo[305704]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:18 compute-0 sudo[305737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:25:18 compute-0 sudo[305737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:18 compute-0 ovs-vsctl[305807]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 08:25:18 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.594794551 +0000 UTC m=+0.053984570 container create ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:25:18 compute-0 systemd[1]: Started libpod-conmon-ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8.scope.
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.567545679 +0000 UTC m=+0.026735718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.688420854 +0000 UTC m=+0.147610893 container init ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.697815875 +0000 UTC m=+0.157005904 container start ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.701372971 +0000 UTC m=+0.160563010 container attach ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 08:25:18 compute-0 nice_perlman[305860]: 167 167
Nov 29 08:25:18 compute-0 systemd[1]: libpod-ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8.scope: Deactivated successfully.
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.706064986 +0000 UTC m=+0.165255005 container died ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab54051480394ec9920c9dd8fdac9ea4051bb130e88981d51778021311e9b17-merged.mount: Deactivated successfully.
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:25:18 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:18 compute-0 podman[305831]: 2025-11-29 08:25:18.746328637 +0000 UTC m=+0.205518656 container remove ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_perlman, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:25:18 compute-0 systemd[1]: libpod-conmon-ffd0bc11933650c4e6f14248d5afc9df20e6785559630f185c7430c6cbe142c8.scope: Deactivated successfully.
Nov 29 08:25:18 compute-0 podman[305901]: 2025-11-29 08:25:18.9313057 +0000 UTC m=+0.050599438 container create 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:25:18 compute-0 systemd[1]: Started libpod-conmon-74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf.scope.
Nov 29 08:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:19 compute-0 podman[305901]: 2025-11-29 08:25:18.909022092 +0000 UTC m=+0.028315820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:19 compute-0 podman[305901]: 2025-11-29 08:25:19.021556501 +0000 UTC m=+0.140850239 container init 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:25:19 compute-0 podman[305901]: 2025-11-29 08:25:19.028455456 +0000 UTC m=+0.147749144 container start 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:25:19 compute-0 podman[305901]: 2025-11-29 08:25:19.031957401 +0000 UTC m=+0.151251089 container attach 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:25:19 compute-0 nova_compute[255040]: 2025-11-29 08:25:19.372 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:19 compute-0 virtqemud[254549]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 08:25:19 compute-0 virtqemud[254549]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 08:25:19 compute-0 virtqemud[254549]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 08:25:19 compute-0 ceph-mon[75237]: pgmap v2213: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:20 compute-0 brave_dirac[305918]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:25:20 compute-0 brave_dirac[305918]: --> relative data size: 1.0
Nov 29 08:25:20 compute-0 brave_dirac[305918]: --> All data devices are unavailable
Nov 29 08:25:20 compute-0 systemd[1]: libpod-74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf.scope: Deactivated successfully.
Nov 29 08:25:20 compute-0 systemd[1]: libpod-74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf.scope: Consumed 1.055s CPU time.
Nov 29 08:25:20 compute-0 podman[305901]: 2025-11-29 08:25:20.164877759 +0000 UTC m=+1.284171447 container died 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 08:25:20 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: cache status {prefix=cache status} (starting...)
Nov 29 08:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2892814349f8b3ae9465fbb1d60e44fe4576d4563e1ca7d42a64c14a3e625c0f-merged.mount: Deactivated successfully.
Nov 29 08:25:20 compute-0 podman[305901]: 2025-11-29 08:25:20.229546934 +0000 UTC m=+1.348840622 container remove 74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:25:20 compute-0 systemd[1]: libpod-conmon-74de05fb3d43c1bcf4d5c6f291a36a5999b2dfa2fcf80f003dd4bc52743503cf.scope: Deactivated successfully.
Nov 29 08:25:20 compute-0 sudo[305737]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:20 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: client ls {prefix=client ls} (starting...)
Nov 29 08:25:20 compute-0 sudo[306231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:20 compute-0 sudo[306231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:20 compute-0 sudo[306231]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:20 compute-0 lvm[306278]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 08:25:20 compute-0 lvm[306278]: VG ceph_vg1 finished
Nov 29 08:25:20 compute-0 sudo[306279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:25:20 compute-0 sudo[306279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:20 compute-0 sudo[306279]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:20 compute-0 lvm[306311]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 08:25:20 compute-0 lvm[306311]: VG ceph_vg2 finished
Nov 29 08:25:20 compute-0 lvm[306322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 08:25:20 compute-0 lvm[306322]: VG ceph_vg0 finished
Nov 29 08:25:20 compute-0 sudo[306313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:20 compute-0 sudo[306313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:20 compute-0 sudo[306313]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:20 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:20 compute-0 sudo[306363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- lvm list --format json
Nov 29 08:25:20 compute-0 sudo[306363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:20 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19215 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:20 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:20 compute-0 podman[306478]: 2025-11-29 08:25:20.945038253 +0000 UTC m=+0.050215159 container create 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:25:20 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 08:25:20 compute-0 systemd[1]: Started libpod-conmon-4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a.scope.
Nov 29 08:25:20 compute-0 nova_compute[255040]: 2025-11-29 08:25:20.992 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:20.918917922 +0000 UTC m=+0.024094848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:21.039671102 +0000 UTC m=+0.144848008 container init 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:21.05747913 +0000 UTC m=+0.162656036 container start 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:21.061253441 +0000 UTC m=+0.166430347 container attach 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:25:21 compute-0 eager_sinoussi[306503]: 167 167
Nov 29 08:25:21 compute-0 systemd[1]: libpod-4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a.scope: Deactivated successfully.
Nov 29 08:25:21 compute-0 conmon[306503]: conmon 4f9094f1572c2130f49b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a.scope/container/memory.events
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:21.066167843 +0000 UTC m=+0.171344749 container died 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 08:25:21 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19217 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-97618fd3c339353b60ce9f3021ce93356e9fadff1df000e46f9775d2364e386d-merged.mount: Deactivated successfully.
Nov 29 08:25:21 compute-0 podman[306478]: 2025-11-29 08:25:21.112617719 +0000 UTC m=+0.217794625 container remove 4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:25:21 compute-0 systemd[1]: libpod-conmon-4f9094f1572c2130f49b8a657c4d83b1f12c96b4cf0e4af102e757999cb74f4a.scope: Deactivated successfully.
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 08:25:21 compute-0 podman[306555]: 2025-11-29 08:25:21.283360911 +0000 UTC m=+0.050040354 container create 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:25:21 compute-0 systemd[1]: Started libpod-conmon-187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c.scope.
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 08:25:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:21 compute-0 podman[306555]: 2025-11-29 08:25:21.260725184 +0000 UTC m=+0.027404647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc053d4d3732c04d965e83fc54ebad08f12c588cb3ce02f335f0dbc9266adc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc053d4d3732c04d965e83fc54ebad08f12c588cb3ce02f335f0dbc9266adc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc053d4d3732c04d965e83fc54ebad08f12c588cb3ce02f335f0dbc9266adc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc053d4d3732c04d965e83fc54ebad08f12c588cb3ce02f335f0dbc9266adc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:21 compute-0 podman[306555]: 2025-11-29 08:25:21.375414191 +0000 UTC m=+0.142093654 container init 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:25:21 compute-0 podman[306555]: 2025-11-29 08:25:21.383425925 +0000 UTC m=+0.150105368 container start 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 08:25:21 compute-0 podman[306555]: 2025-11-29 08:25:21.386743715 +0000 UTC m=+0.153423188 container attach 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 08:25:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 08:25:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224327159' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 08:25:21 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19223 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:21 compute-0 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:25:21 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T08:25:21.848+0000 7fed72bf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:25:21 compute-0 ceph-mon[75237]: pgmap v2214: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:21 compute-0 ceph-mon[75237]: from='client.19215 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:21 compute-0 ceph-mon[75237]: from='client.19217 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:21 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/224327159' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 08:25:21 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 08:25:21 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:25:21 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607240894' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: ops {prefix=ops} (starting...)
Nov 29 08:25:22 compute-0 reverent_hertz[306613]: {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     "0": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "devices": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "/dev/loop3"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             ],
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_name": "ceph_lv0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_size": "21470642176",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d2206e5d-36d0-4dcd-a218-91d42a449afa,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "name": "ceph_lv0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "tags": {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_uuid": "U4Dc8I-cgCE-wfcO-09Je-BaYg-aZIW-ruiu2Y",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_name": "ceph",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.crush_device_class": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.encrypted": "0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_fsid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_id": "0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.vdo": "0"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             },
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "vg_name": "ceph_vg0"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         }
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     ],
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     "1": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "devices": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "/dev/loop4"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             ],
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_name": "ceph_lv1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_size": "21470642176",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e72f2659-baec-4840-b3cf-a1856ca51c15,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "name": "ceph_lv1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "tags": {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_uuid": "yInaiw-VDvX-LnOR-TZ2d-eAMO-m8Mv-MkZ0GQ",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_name": "ceph",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.crush_device_class": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.encrypted": "0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_fsid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_id": "1",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.vdo": "0"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             },
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "vg_name": "ceph_vg1"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         }
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     ],
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     "2": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "devices": [
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "/dev/loop5"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             ],
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_name": "ceph_lv2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_size": "21470642176",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=321e9cb7-01a2-5759-bf8c-981c9a64aa3e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2406c235-b877-477d-8a53-b5b71e6811ae,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "lv_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "name": "ceph_lv2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "tags": {
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.block_uuid": "zL770v-2kfw-6Ghx-RNml-PHGx-w8Mm-J80KDt",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.cluster_name": "ceph",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.crush_device_class": "",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.encrypted": "0",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_fsid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osd_id": "2",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:                 "ceph.vdo": "0"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             },
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "type": "block",
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:             "vg_name": "ceph_vg2"
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:         }
Nov 29 08:25:22 compute-0 reverent_hertz[306613]:     ]
Nov 29 08:25:22 compute-0 reverent_hertz[306613]: }
Nov 29 08:25:22 compute-0 systemd[1]: libpod-187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c.scope: Deactivated successfully.
Nov 29 08:25:22 compute-0 podman[306555]: 2025-11-29 08:25:22.253904902 +0000 UTC m=+1.020584365 container died 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc053d4d3732c04d965e83fc54ebad08f12c588cb3ce02f335f0dbc9266adc8f-merged.mount: Deactivated successfully.
Nov 29 08:25:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 08:25:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601695409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 08:25:22 compute-0 podman[306555]: 2025-11-29 08:25:22.309457453 +0000 UTC m=+1.076136896 container remove 187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hertz, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:25:22 compute-0 systemd[1]: libpod-conmon-187e1ab52ba0c1ae9c676abc9921f3f8585227a88897b792238a7fe198398a3c.scope: Deactivated successfully.
Nov 29 08:25:22 compute-0 sudo[306363]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 08:25:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331894458' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 08:25:22 compute-0 sudo[306745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:22 compute-0 sudo[306745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:22 compute-0 sudo[306745]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:22 compute-0 sudo[306781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:25:22 compute-0 sudo[306781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:22 compute-0 sudo[306781]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:22 compute-0 sudo[306825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:22 compute-0 sudo[306825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:22 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:22 compute-0 sudo[306825]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:22 compute-0 sudo[306871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/321e9cb7-01a2-5759-bf8c-981c9a64aa3e/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 321e9cb7-01a2-5759-bf8c-981c9a64aa3e -- raw list --format json
Nov 29 08:25:22 compute-0 sudo[306871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 08:25:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029103106' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 08:25:22 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038201915' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: session ls {prefix=session ls} (starting...)
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.885623973 +0000 UTC m=+0.044238908 container create 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.19223 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2607240894' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3601695409' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/331894458' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2029103106' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:25:22 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1038201915' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 08:25:22 compute-0 systemd[1]: Started libpod-conmon-7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2.scope.
Nov 29 08:25:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.955793905 +0000 UTC m=+0.114408860 container init 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.866597803 +0000 UTC m=+0.025212768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.961652303 +0000 UTC m=+0.120267238 container start 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.965196068 +0000 UTC m=+0.123811023 container attach 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 08:25:22 compute-0 epic_rhodes[306981]: 167 167
Nov 29 08:25:22 compute-0 systemd[1]: libpod-7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2.scope: Deactivated successfully.
Nov 29 08:25:22 compute-0 conmon[306981]: conmon 7752530c6f48a3ce42b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2.scope/container/memory.events
Nov 29 08:25:22 compute-0 podman[306941]: 2025-11-29 08:25:22.969970276 +0000 UTC m=+0.128585201 container died 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-102a60ecaa7e94c698631ca1c1ae86e4b8e5997a90323607e17a597d6b68d8c5-merged.mount: Deactivated successfully.
Nov 29 08:25:23 compute-0 ceph-mds[101581]: mds.cephfs.compute-0.yemcdg asok_command: status {prefix=status} (starting...)
Nov 29 08:25:23 compute-0 podman[306941]: 2025-11-29 08:25:23.004754219 +0000 UTC m=+0.163369154 container remove 7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:25:23 compute-0 systemd[1]: libpod-conmon-7752530c6f48a3ce42b7d3738a4f121484b805ec176ea0ee655796093ff0a3b2.scope: Deactivated successfully.
Nov 29 08:25:23 compute-0 podman[307024]: 2025-11-29 08:25:23.167925548 +0000 UTC m=+0.036340616 container create a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:25:23 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19237 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:23 compute-0 systemd[1]: Started libpod-conmon-a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1.scope.
Nov 29 08:25:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 08:25:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793944556' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:25:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8d7f916b6d4c516411fc63870ee31dccb6ea5158037c13794d2a42b2cc190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8d7f916b6d4c516411fc63870ee31dccb6ea5158037c13794d2a42b2cc190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8d7f916b6d4c516411fc63870ee31dccb6ea5158037c13794d2a42b2cc190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8d7f916b6d4c516411fc63870ee31dccb6ea5158037c13794d2a42b2cc190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:25:23 compute-0 podman[307024]: 2025-11-29 08:25:23.150568211 +0000 UTC m=+0.018983289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:25:23 compute-0 podman[307024]: 2025-11-29 08:25:23.254067389 +0000 UTC m=+0.122482487 container init a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:25:23 compute-0 podman[307024]: 2025-11-29 08:25:23.261259252 +0000 UTC m=+0.129674330 container start a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:25:23 compute-0 podman[307024]: 2025-11-29 08:25:23.264332925 +0000 UTC m=+0.132748013 container attach a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:25:23 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19239 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:23 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 08:25:23 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291998563' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139028660' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458887656' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mon[75237]: pgmap v2215: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:24 compute-0 ceph-mon[75237]: from='client.19237 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1793944556' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4291998563' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]: {
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     "2406c235-b877-477d-8a53-b5b71e6811ae": {
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_id": 2,
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_uuid": "2406c235-b877-477d-8a53-b5b71e6811ae",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "type": "bluestore"
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     },
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     "d2206e5d-36d0-4dcd-a218-91d42a449afa": {
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_id": 0,
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_uuid": "d2206e5d-36d0-4dcd-a218-91d42a449afa",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "type": "bluestore"
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     },
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     "e72f2659-baec-4840-b3cf-a1856ca51c15": {
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "ceph_fsid": "321e9cb7-01a2-5759-bf8c-981c9a64aa3e",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_id": 1,
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "osd_uuid": "e72f2659-baec-4840-b3cf-a1856ca51c15",
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:         "type": "bluestore"
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]:     }
Nov 29 08:25:24 compute-0 inspiring_lovelace[307046]: }
Nov 29 08:25:24 compute-0 systemd[1]: libpod-a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1.scope: Deactivated successfully.
Nov 29 08:25:24 compute-0 podman[307024]: 2025-11-29 08:25:24.297685922 +0000 UTC m=+1.166100990 container died a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:25:24 compute-0 systemd[1]: libpod-a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1.scope: Consumed 1.026s CPU time.
Nov 29 08:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-56e8d7f916b6d4c516411fc63870ee31dccb6ea5158037c13794d2a42b2cc190-merged.mount: Deactivated successfully.
Nov 29 08:25:24 compute-0 podman[307024]: 2025-11-29 08:25:24.349345708 +0000 UTC m=+1.217760776 container remove a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:25:24 compute-0 systemd[1]: libpod-conmon-a4b4bc94962d2a4650ac3fe9d258b5a64c0054d880d40121d5bbe60a32fd87a1.scope: Deactivated successfully.
Nov 29 08:25:24 compute-0 nova_compute[255040]: 2025-11-29 08:25:24.376 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:24 compute-0 sudo[306871]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [INF] : from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:24 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev ed200db2-8b78-45c9-a124-e03d93149cc4 does not exist
Nov 29 08:25:24 compute-0 ceph-mgr[75527]: [progress WARNING root] complete: ev 3113fde3-556e-48bc-a6ba-122118bb007e does not exist
Nov 29 08:25:24 compute-0 sudo[307253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:25:24 compute-0 sudo[307253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:24 compute-0 sudo[307253]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:24 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693093583' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 08:25:24 compute-0 sudo[307278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:25:24 compute-0 sudo[307278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:25:24 compute-0 sudo[307278]: pam_unix(sudo:session): session closed for user root
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576333619' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19253 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:24 compute-0 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 08:25:24 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T08:25:24.895+0000 7fed72bf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 08:25:24 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 08:25:24 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2058380100' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.19239 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1139028660' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3458887656' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='mgr.14126 192.168.122.100:0/1067641046' entity='mgr.compute-0.fwfehy' 
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/693093583' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/576333619' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2058380100' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 08:25:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055110250' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19260 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 08:25:25 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060542083' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 08:25:25 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:26 compute-0 nova_compute[255040]: 2025-11-29 08:25:26.039 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:26 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 08:25:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242098' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: pgmap v2216: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:26 compute-0 ceph-mon[75237]: from='client.19253 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4055110250' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: from='client.19257 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3060542083' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:03.973404+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:04.973701+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250655 data_alloc: 218103808 data_used: 7241728
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:05.973899+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f5cca1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:06.974152+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa55f000/0x0/0x4ffc00000, data 0xffa663/0x110f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:07.974433+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f572b000 session 0x5571f313c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:08.974603+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f5762b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:09.974805+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 15794176 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.050296783s of 10.406439781s, submitted: 22
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4d90d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253462 data_alloc: 218103808 data_used: 7241728
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:10.974945+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa55e000/0x0/0x4ffc00000, data 0xffa673/0x1110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:11.975140+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 15753216 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:12.975309+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa55e000/0x0/0x4ffc00000, data 0xffa673/0x1110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 15679488 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:13.975441+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 15679488 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f4b6c960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f570e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:14.975588+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97689600 unmapped: 15679488 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6866400 session 0x5571f57625a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f3b2f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227535 data_alloc: 218103808 data_used: 7245824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4b6d4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:15.975854+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:16.976234+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa7b8000/0x0/0x4ffc00000, data 0xda0673/0xeb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:17.976628+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:18.976923+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa7b8000/0x0/0x4ffc00000, data 0xda0673/0xeb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:19.977123+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229363 data_alloc: 218103808 data_used: 7245824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:20.977407+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97591296 unmapped: 15777792 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa7b8000/0x0/0x4ffc00000, data 0xda0673/0xeb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:21.977620+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 15769600 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:22.977925+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 15769600 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:23.978153+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 15769600 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa7b8000/0x0/0x4ffc00000, data 0xda0673/0xeb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:24.978393+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 15769600 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229363 data_alloc: 218103808 data_used: 7245824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:25.978639+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 97599488 unmapped: 15769600 heap: 113369088 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f4ea7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f4d8cd20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6866800 session 0x5571f4d92b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f570f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.500174522s of 16.296358109s, submitted: 27
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:26.978844+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4e82960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f571be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f4b6de00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6866c00 session 0x5571f4e305a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98033664 unmapped: 22683648 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:27.978977+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f3316000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa34e000/0x0/0x4ffc00000, data 0x120a673/0x1320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f3843c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f571ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 99033088 unmapped: 21684224 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6867000 session 0x5571f4da85a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f3a1e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:28.979232+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 21864448 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:29.979456+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 21864448 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312491 data_alloc: 218103808 data_used: 7245824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4e82f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:30.979716+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 21864448 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9e2b000/0x0/0x4ffc00000, data 0x172d673/0x1843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:31.979913+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 21864448 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:32.980184+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 21864448 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f65825a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:33.980439+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9e07000/0x0/0x4ffc00000, data 0x1751673/0x1867000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6867400 session 0x5571f4d8cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 22339584 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f551ba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:34.980598+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 22339584 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f4d905a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6867800 session 0x5571f25272c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338014 data_alloc: 234881024 data_used: 9711616
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:35.980746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 21938176 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:36.980963+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.985517502s of 10.341941833s, submitted: 59
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f5a88b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4e70d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 99139584 unmapped: 21577728 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f9e06000/0x0/0x4ffc00000, data 0x1751696/0x1868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f313c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:37.981159+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 20365312 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:38.981380+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 20365312 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa294000/0x0/0x4ffc00000, data 0x12c3696/0x13da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:39.981507+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 20365312 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320740 data_alloc: 234881024 data_used: 12349440
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:40.981659+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 20365312 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:41.981777+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 20365312 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:42.981954+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fa294000/0x0/0x4ffc00000, data 0x12c3696/0x13da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 20332544 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:43.982125+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f30b0780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 20324352 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:44.982311+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 28655616 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490896 data_alloc: 234881024 data_used: 12349440
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:45.982441+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100548608 unmapped: 28565504 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f8a92000/0x0/0x4ffc00000, data 0x2ac36ca/0x2bdc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:46.982667+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500057220s of 10.008541107s, submitted: 48
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 28549120 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:47.982820+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109166592 unmapped: 19947520 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:48.982979+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 28254208 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:49.983140+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 22110208 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1709440 data_alloc: 234881024 data_used: 13537280
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:50.983341+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 13549568 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f6c12000/0x0/0x4ffc00000, data 0x493d6d0/0x4a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:51.983552+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105578496 unmapped: 23535616 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:52.983717+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f5387000/0x0/0x4ffc00000, data 0x61ce6d8/0x62e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105766912 unmapped: 23347200 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:53.983874+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f4b87000/0x0/0x4ffc00000, data 0x69ce6d8/0x6ae7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105799680 unmapped: 23314432 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:54.984159+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105799680 unmapped: 23314432 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1930244 data_alloc: 234881024 data_used: 13443072
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:55.984322+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105799680 unmapped: 23314432 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:56.984568+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.213160515s of 10.017291069s, submitted: 129
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105938944 unmapped: 23175168 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f4387000/0x0/0x4ffc00000, data 0x71ce6d8/0x72e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:57.984721+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 22757376 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:58.984970+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f2b65000/0x0/0x4ffc00000, data 0x89f06d8/0x8b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 14360576 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:59.985172+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 22700032 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2204764 data_alloc: 234881024 data_used: 13443072
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:00.985316+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 14254080 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:01.985442+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 106471424 unmapped: 22642688 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f56f6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:02.985657+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f7285800 session 0x5571f570f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f1b65000/0x0/0x4ffc00000, data 0x99f06d8/0x9b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107765760 unmapped: 21348352 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:03.985852+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107814912 unmapped: 21299200 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:04.986076+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f7285400 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107823104 unmapped: 21291008 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4f0b65000/0x0/0x4ffc00000, data 0xa9f06d8/0xab09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:05.986317+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2501170 data_alloc: 234881024 data_used: 13967360
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107978752 unmapped: 21135360 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:06.986676+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4efb64000/0x0/0x4ffc00000, data 0xb9f073a/0xbb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.781510830s of 10.003835678s, submitted: 37
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 21102592 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f3763000 session 0x5571f4e703c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:07.987259+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f6583c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f6c89c00 session 0x5571f4e71680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 20406272 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4eeb11000/0x0/0x4ffc00000, data 0xca446e8/0xcb5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:08.987697+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 11862016 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f7285400 session 0x5571f3dda780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.987909+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f7285800 session 0x5571f4b6cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 12206080 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.988262+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2867606 data_alloc: 234881024 data_used: 13967360
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 20455424 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.988426+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 20275200 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4ebb62000/0x0/0x4ffc00000, data 0xf9f36e8/0xfb0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.988615+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 117506048 unmapped: 11608064 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 heartbeat osd_stat(store_statfs(0x4ea362000/0x0/0x4ffc00000, data 0x111f36e8/0x1130c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.988763+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 19865600 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.988953+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109355008 unmapped: 19759104 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f571ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.989124+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3418458 data_alloc: 234881024 data_used: 13971456
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 ms_handle_reset con 0x5571f7285400 session 0x5571f4b6c960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 19619840 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 193 heartbeat osd_stat(store_statfs(0x4e7361000/0x0/0x4ffc00000, data 0x141f36f8/0x1430d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 193 ms_handle_reset con 0x5571f7285000 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.989341+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.277887344s of 10.001016617s, submitted: 135
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 19529728 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 194 ms_handle_reset con 0x5571f7284000 session 0x5571f4e82960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 194 ms_handle_reset con 0x5571f7284400 session 0x5571f55b10e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 194 ms_handle_reset con 0x5571f6c89c00 session 0x5571f65825a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.989478+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 19439616 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.989814+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 194 ms_handle_reset con 0x5571f7284000 session 0x5571f55b1860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109748224 unmapped: 19365888 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 ms_handle_reset con 0x5571f7285000 session 0x5571f5ccaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 ms_handle_reset con 0x5571f6867800 session 0x5571f2d01c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.990155+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 ms_handle_reset con 0x5571f4aa9c00 session 0x5571f4b6cd20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 19357696 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 ms_handle_reset con 0x5571f6c89c00 session 0x5571f3b061e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 ms_handle_reset con 0x5571f3763000 session 0x5571f4b6d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.990294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3649598 data_alloc: 234881024 data_used: 13983744
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 19349504 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f7284000 session 0x5571f31a12c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.990469+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f6867c00 session 0x5571f4da94a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f7285c00 session 0x5571f4ea63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f6867800 session 0x5571f7f2fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105832448 unmapped: 23281664 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 heartbeat osd_stat(store_statfs(0x4e534d000/0x0/0x4ffc00000, data 0x161ffbc4/0x1631e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f3763000 session 0x5571f55b0000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.990687+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 ms_handle_reset con 0x5571f6867c00 session 0x5571f571b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.990912+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.991194+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.991362+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318309 data_alloc: 218103808 data_used: 7782400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.991663+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fa7aa000/0x0/0x4ffc00000, data 0xda7b4f/0xec3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.991843+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.550793648s of 11.346415520s, submitted: 188
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105906176 unmapped: 23207936 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 ms_handle_reset con 0x5571f6c89c00 session 0x5571f55b12c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.991992+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 ms_handle_reset con 0x5571f7284000 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 23199744 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.992238+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa7a9000/0x0/0x4ffc00000, data 0xda973d/0xec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 ms_handle_reset con 0x5571f3763000 session 0x5571f313a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 23199744 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.992416+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323135 data_alloc: 218103808 data_used: 7786496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 23199744 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.992680+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa7a9000/0x0/0x4ffc00000, data 0xda974d/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105914368 unmapped: 23199744 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.992908+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fa7a5000/0x0/0x4ffc00000, data 0xdab293/0xec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f6867800 session 0x5571f5763a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f6c89c00 session 0x5571f7f3b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f6867c00 session 0x5571f551a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 23912448 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f7285c00 session 0x5571f7f3ba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.993084+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fa7a6000/0x0/0x4ffc00000, data 0xdab283/0xec7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f6867800 session 0x5571f7f3be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f3763000 session 0x5571f4bef680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 ms_handle_reset con 0x5571f6867c00 session 0x5571f394a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105267200 unmapped: 23846912 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.993304+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 23830528 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.993441+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f6c89c00 session 0x5571f570ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328527 data_alloc: 218103808 data_used: 7270400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fa7a3000/0x0/0x4ffc00000, data 0xdacd3d/0xeca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 23797760 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.993659+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f7285c00 session 0x5571f570e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 23797760 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.993922+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.967693329s of 10.206396103s, submitted: 93
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105316352 unmapped: 23797760 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f3763000 session 0x5571f6582960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.994110+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f6867800 session 0x5571f4e301e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f6867c00 session 0x5571f313c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f6c89c00 session 0x5571f33163c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 23396352 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.994450+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 23396352 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.994705+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476894 data_alloc: 218103808 data_used: 7270400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f9500000/0x0/0x4ffc00000, data 0x204fd9f/0x216e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 23396352 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.995002+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105734144 unmapped: 23379968 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 ms_handle_reset con 0x5571f7285000 session 0x5571f4eb74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.995201+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105734144 unmapped: 23379968 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.995427+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105734144 unmapped: 23379968 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.995583+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 199 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 106790912 unmapped: 22323200 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.995786+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483070 data_alloc: 218103808 data_used: 7282688
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f94fb000/0x0/0x4ffc00000, data 0x20519d5/0x2172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105742336 unmapped: 23371776 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.996153+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 105742336 unmapped: 23371776 heap: 129114112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.042559+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 200 ms_handle_reset con 0x5571f6867c00 session 0x5571f56f6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.893971443s of 10.065669060s, submitted: 60
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 12132352 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.042817+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 25681920 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f878e000/0x0/0x4ffc00000, data 0x2dbf9d5/0x2ee0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.042988+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 25681920 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.043213+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1591594 data_alloc: 218103808 data_used: 7278592
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f6867800 session 0x5571f571a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.043458+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.043778+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f3763000 session 0x5571f4eb7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.044021+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f75a8400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f75a8400 session 0x5571f3f201e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.044274+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f878a000/0x0/0x4ffc00000, data 0x2dc15a9/0x2ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.044541+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1594417 data_alloc: 218103808 data_used: 7286784
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.044745+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.044902+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.045117+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f75a8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f75a8000 session 0x5571f3200000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f878a000/0x0/0x4ffc00000, data 0x2dc15a9/0x2ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.045338+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.045511+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1594417 data_alloc: 218103808 data_used: 7286784
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.045746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f878a000/0x0/0x4ffc00000, data 0x2dc15a9/0x2ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f878a000/0x0/0x4ffc00000, data 0x2dc15a9/0x2ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 25673728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.045918+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f878a000/0x0/0x4ffc00000, data 0x2dc15a9/0x2ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f6867800 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f3763000 session 0x5571f32003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 25649152 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f75a8400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.046114+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.854703903s of 15.671625137s, submitted: 19
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f75a8400 session 0x5571f4da9c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107905024 unmapped: 25411584 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f6eacc00 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 ms_handle_reset con 0x5571f572b800 session 0x5571f31efa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.046237+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f6867800 session 0x5571f571ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f6867c00 session 0x5571f570f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f6eacc00 session 0x5571f313c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f3763000 session 0x5571f3b2f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 26230784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.046432+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1641628 data_alloc: 218103808 data_used: 7294976
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f75a8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f75a8800 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f3763000 session 0x5571f4d8d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 26230784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 ms_handle_reset con 0x5571f6867800 session 0x5571f3b072c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.046660+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 203 ms_handle_reset con 0x5571f6867c00 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 26230784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 203 ms_handle_reset con 0x5571f6eacc00 session 0x5571f5cca3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.046821+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f75a8400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 203 ms_handle_reset con 0x5571f75a8400 session 0x5571f6582000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f86ad000/0x0/0x4ffc00000, data 0x2dc4da5/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 26230784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.047049+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 204 ms_handle_reset con 0x5571f3763000 session 0x5571f2eff4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 26181632 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.047251+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 26173440 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.047411+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506148 data_alloc: 218103808 data_used: 7303168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 26165248 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.047556+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 204 heartbeat osd_stat(store_statfs(0x4f94f0000/0x0/0x4ffc00000, data 0x2058987/0x217d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 205 ms_handle_reset con 0x5571f6867800 session 0x5571f4e82780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 205 ms_handle_reset con 0x5571f6867c00 session 0x5571f551ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 26157056 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.047738+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 26157056 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.047906+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 205 ms_handle_reset con 0x5571f6eacc00 session 0x5571f551a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 205 ms_handle_reset con 0x5571f572b000 session 0x5571f570e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26148864 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.048050+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 26148864 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.048238+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456398 data_alloc: 218103808 data_used: 7303168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.217712402s of 11.991531372s, submitted: 119
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 206 ms_handle_reset con 0x5571f6867800 session 0x5571f4e4a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 206 ms_handle_reset con 0x5571f3763000 session 0x5571f4d903c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 206 ms_handle_reset con 0x5571f6867c00 session 0x5571f56f72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 26173440 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.048607+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 207 ms_handle_reset con 0x5571f5a68c00 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 26132480 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f9b9a000/0x0/0x4ffc00000, data 0x19ab0f9/0x1ad3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.048845+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 26116096 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.048998+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 208 ms_handle_reset con 0x5571f4b91000 session 0x5571f3ddbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3763000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 208 ms_handle_reset con 0x5571f4b90400 session 0x5571f394a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 208 ms_handle_reset con 0x5571f6eacc00 session 0x5571f4da8960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 208 ms_handle_reset con 0x5571f6867800 session 0x5571f551a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 26083328 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.049205+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f5a68c00 session 0x5571f5a88b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f3763000 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 26066944 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.049328+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f4b90400 session 0x5571f55b0960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505063 data_alloc: 218103808 data_used: 7303168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f5a68c00 session 0x5571f3a1e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 25395200 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.049611+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1caf8df/0x1ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 25395200 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.049761+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f6eacc00 session 0x5571f31efa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f6867800 session 0x5571f55b0d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f4b90800 session 0x5571f570f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1caf8df/0x1ddc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.049999+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 ms_handle_reset con 0x5571f4b90400 session 0x5571f4d8d2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.050185+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.050338+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505396 data_alloc: 218103808 data_used: 7315456
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.050548+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1caf87d/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.050663+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107929600 unmapped: 25387008 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.050821+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.283888817s of 13.240381241s, submitted: 95
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 ms_handle_reset con 0x5571f5a68c00 session 0x5571f4eb72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107937792 unmapped: 25378816 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.050986+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 ms_handle_reset con 0x5571f6867800 session 0x5571f4eb65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 ms_handle_reset con 0x5571f6867c00 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.051240+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482956 data_alloc: 218103808 data_used: 7327744
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f9b8e000/0x0/0x4ffc00000, data 0x19b2451/0x1adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.051461+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.051604+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 ms_handle_reset con 0x5571f6eacc00 session 0x5571f7f3a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.051799+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.051934+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.052077+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482956 data_alloc: 218103808 data_used: 7327744
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 25362432 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.052425+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f9b8e000/0x0/0x4ffc00000, data 0x19b2451/0x1adf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 25354240 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.052592+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9b8a000/0x0/0x4ffc00000, data 0x19b4079/0x1ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 25354240 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.052768+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 212 ms_handle_reset con 0x5571f5a68c00 session 0x5571f2d003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 25346048 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.052947+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.284337044s of 10.777566910s, submitted: 21
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 213 ms_handle_reset con 0x5571f4b90400 session 0x5571f5763680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 213 ms_handle_reset con 0x5571f6867c00 session 0x5571f2d01c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 213 ms_handle_reset con 0x5571f6867800 session 0x5571f4b6de00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 25346048 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.053181+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493202 data_alloc: 218103808 data_used: 7319552
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572cc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 214 ms_handle_reset con 0x5571f572cc00 session 0x5571f3f20960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f9b84000/0x0/0x4ffc00000, data 0x19b73b3/0x1ae7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108019712 unmapped: 25296896 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.053594+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 214 ms_handle_reset con 0x5571f5a68c00 session 0x5571f5cca3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 215 ms_handle_reset con 0x5571f6867800 session 0x5571f5cca780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 215 ms_handle_reset con 0x5571f572d800 session 0x5571f313d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108019712 unmapped: 25296896 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.053768+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 216 ms_handle_reset con 0x5571f6867c00 session 0x5571f5a89680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 217 ms_handle_reset con 0x5571f4b90400 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 25214976 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.053989+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 217 ms_handle_reset con 0x5571f572d800 session 0x5571f4e303c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 25174016 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.054075+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 218 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e30b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 25124864 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.054193+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513529 data_alloc: 218103808 data_used: 7340032
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 25124864 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.054498+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f9765000/0x0/0x4ffc00000, data 0x19c00a1/0x1af8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 25108480 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.054635+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 218 ms_handle_reset con 0x5571f6867800 session 0x5571f56f6780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 ms_handle_reset con 0x5571f5a68c00 session 0x5571f7f3a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 ms_handle_reset con 0x5571f6867c00 session 0x5571f4da85a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 25100288 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.054918+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 25100288 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:50.055051+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 ms_handle_reset con 0x5571f4b90400 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x19c1cb1/0x1af9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 25100288 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.055243+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521559 data_alloc: 218103808 data_used: 7356416
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 25100288 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.055426+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 25100288 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.055607+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.511158943s of 13.009929657s, submitted: 151
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 ms_handle_reset con 0x5571f572d800 session 0x5571f3ddb860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x19c1cb1/0x1af9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 219 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f5a68c00 session 0x5571f4d8cd20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.055817+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25067520 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.055991+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25067520 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f6867800 session 0x5571f313d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.056180+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 25067520 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527540 data_alloc: 218103808 data_used: 7364608
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f9760000/0x0/0x4ffc00000, data 0x19c37e9/0x1afd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f572d000 session 0x5571f31a1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f4b90400 session 0x5571f5762b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.056356+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f572d000 session 0x5571f4e4b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 24674304 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f572d800 session 0x5571f313b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 ms_handle_reset con 0x5571f5a68c00 session 0x5571f3a1e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.056487+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 24666112 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 heartbeat osd_stat(store_statfs(0x4f890a000/0x0/0x4ffc00000, data 0x281a7e9/0x2954000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.057074+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 24666112 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x281c3f5/0x2957000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.057640+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 24666112 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f551b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572c800 session 0x5571f4e83860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6867800 session 0x5571f500fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f570fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x281c3f5/0x2957000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.058052+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 24649728 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4b90400 session 0x5571f56f7680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572d000 session 0x5571f4e4a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642232 data_alloc: 218103808 data_used: 7380992
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.058456+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 24633344 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.058605+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 24633344 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f56f7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.641375542s of 10.360332489s, submitted: 84
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.058723+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 14106624 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572d800 session 0x5571f570e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f5a68c00 session 0x5571f551b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4fc4800 session 0x5571f4d923c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.058828+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 13975552 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.058971+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 13975552 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4fc4800 session 0x5571f56f63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4b90400 session 0x5571f4da8780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749248 data_alloc: 234881024 data_used: 22245376
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x281c405/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.059177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 13959168 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.059309+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 13959168 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.059539+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 13959168 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.059738+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 13942784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.059996+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 13942784 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572d800 session 0x5571f4ea6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1749248 data_alloc: 234881024 data_used: 22245376
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.060182+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 13279232 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f5a68c00 session 0x5571f313d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f8906000/0x0/0x4ffc00000, data 0x281c405/0x2958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.060302+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 13246464 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f570e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.060474+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 13230080 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.747097969s of 10.884669304s, submitted: 32
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.060599+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f82ee000/0x0/0x4ffc00000, data 0x2e34405/0x2f70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 8421376 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f5a89c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.060702+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 6168576 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e83860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1809195 data_alloc: 234881024 data_used: 22822912
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f81d6000/0x0/0x4ffc00000, data 0x2f44405/0x3080000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.060865+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 127164416 unmapped: 6152192 heap: 133316608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.061004+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135331840 unmapped: 11632640 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4fc4800 session 0x5571f30b0780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.061180+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124583936 unmapped: 22380544 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f76ac000/0x0/0x4ffc00000, data 0x3a76405/0x3bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.061309+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 22339584 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f76ac000/0x0/0x4ffc00000, data 0x3a76405/0x3bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.061442+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 22339584 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1894239 data_alloc: 234881024 data_used: 23232512
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.061586+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 22339584 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.061778+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 22339584 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572d800 session 0x5571f3b2f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.061988+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 22306816 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.040774345s of 10.317973137s, submitted: 120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.062156+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 22282240 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f76ac000/0x0/0x4ffc00000, data 0x3a76405/0x3bb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a68c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.062283+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f5a68c00 session 0x5571f6583860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1811893 data_alloc: 234881024 data_used: 23232512
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.062447+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.062588+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.062737+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.062890+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.063036+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f81d3000/0x0/0x4ffc00000, data 0x2f4f405/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1811893 data_alloc: 234881024 data_used: 23232512
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.063168+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4b90400 session 0x5571f313c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f4fc4800 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f572d800 session 0x5571f551ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 ms_handle_reset con 0x5571f6c89c00 session 0x5571f3a1f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.063261+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 23060480 heap: 146964480 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.063438+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 26484736 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc5000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f7764000/0x0/0x4ffc00000, data 0x39bd42e/0x3afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 221 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.063595+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124755968 unmapped: 26411008 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.393390656s of 10.297008514s, submitted: 35
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f4e4a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f4d8da40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f4b90400 session 0x5571f3a1f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.063716+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f4fc5000 session 0x5571f4d90d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f4fc4800 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 ms_handle_reset con 0x5571f572d800 session 0x5571f500fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f769e000/0x0/0x4ffc00000, data 0x3a80064/0x3bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904850 data_alloc: 234881024 data_used: 23240704
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.063941+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f769e000/0x0/0x4ffc00000, data 0x3a8009d/0x3bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.064051+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 ms_handle_reset con 0x5571f572d800 session 0x5571f5a89a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 heartbeat osd_stat(store_statfs(0x4f769e000/0x0/0x4ffc00000, data 0x3a8009d/0x3bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 ms_handle_reset con 0x5571f4b90400 session 0x5571f5763680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.064146+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 ms_handle_reset con 0x5571f4fc4800 session 0x5571f5a89680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f551b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.071437+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc5000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 224 ms_handle_reset con 0x5571f6c89c00 session 0x5571f3f21e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.071598+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 124772352 unmapped: 26394624 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1935142 data_alloc: 234881024 data_used: 26361856
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.071767+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 16228352 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.071924+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 16228352 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 224 ms_handle_reset con 0x5571f4b90400 session 0x5571f4eb74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 224 ms_handle_reset con 0x5571f4fc4800 session 0x5571f6582780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.072066+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 16211968 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 225 ms_handle_reset con 0x5571f572d800 session 0x5571f4ea61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 225 ms_handle_reset con 0x5571f38d4000 session 0x5571f3ddaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f7699000/0x0/0x4ffc00000, data 0x3a83845/0x3bc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b86400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 225 ms_handle_reset con 0x5571f4b86400 session 0x5571f4e4bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.072245+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 16195584 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.072488+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 16195584 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.563429832s of 11.094932556s, submitted: 28
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2001919 data_alloc: 251658240 data_used: 34287616
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 heartbeat osd_stat(store_statfs(0x4f7691000/0x0/0x4ffc00000, data 0x3a871e7/0x3bcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.072713+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 16195584 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 ms_handle_reset con 0x5571f6c89c00 session 0x5571f55b01e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e82960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 ms_handle_reset con 0x5571f4b90400 session 0x5571f55b0000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f4eb7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.072912+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 16113664 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 ms_handle_reset con 0x5571f4fc4800 session 0x5571f4e4b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.073084+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 227 ms_handle_reset con 0x5571f38d4000 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136101888 unmapped: 15065088 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 228 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f3201c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.073377+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136134656 unmapped: 15032320 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f768b000/0x0/0x4ffc00000, data 0x3a8ae8a/0x3bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 229 ms_handle_reset con 0x5571f4fc4800 session 0x5571f4eb6780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.073549+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 229 ms_handle_reset con 0x5571f4b90400 session 0x5571f2d01c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 14983168 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2013167 data_alloc: 251658240 data_used: 34295808
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.073772+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136183808 unmapped: 14983168 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 230 ms_handle_reset con 0x5571f6c89c00 session 0x5571f5763a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.073910+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 10846208 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.074075+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139829248 unmapped: 11337728 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 231 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e31a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.074260+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 232 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f56f7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f6b26000/0x0/0x4ffc00000, data 0x45ed1b0/0x4737000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 10887168 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.074357+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 232 ms_handle_reset con 0x5571f4fc4800 session 0x5571f500f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 10461184 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 233 ms_handle_reset con 0x5571f4b90400 session 0x5571f56f7680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115793 data_alloc: 251658240 data_used: 35495936
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.074527+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 10412032 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.074760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.284686089s of 11.837862968s, submitted: 156
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 234 ms_handle_reset con 0x5571f572d800 session 0x5571f4d90f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.074980+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.075183+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f6b13000/0x0/0x4ffc00000, data 0x45fa12a/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.075336+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117019 data_alloc: 251658240 data_used: 35504128
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.075562+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.075698+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 ms_handle_reset con 0x5571f38d4000 session 0x5571f5ccaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 ms_handle_reset con 0x5571f4b90400 session 0x5571f7f2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.075903+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f6b13000/0x0/0x4ffc00000, data 0x45fb875/0x474b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.076068+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139362304 unmapped: 11804672 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 ms_handle_reset con 0x5571f572c800 session 0x5571f4d90780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 ms_handle_reset con 0x5571f6867800 session 0x5571f7f3a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.076176+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 20701184 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 ms_handle_reset con 0x5571f4fc4800 session 0x5571f7f2e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855406 data_alloc: 234881024 data_used: 20795392
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.076398+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 20684800 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.076643+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 20684800 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.000799179s of 10.470900536s, submitted: 52
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.076940+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 20668416 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.077168+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 20668416 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 237 heartbeat osd_stat(store_statfs(0x4f809a000/0x0/0x4ffc00000, data 0x307333f/0x31c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.077319+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 237 ms_handle_reset con 0x5571f38d4000 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 20668416 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1859652 data_alloc: 234881024 data_used: 20803584
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.077490+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 20668416 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.077693+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 20652032 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.077836+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e30960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 20594688 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4fc4800 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.077969+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f572c800 session 0x5571f4e83a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 20561920 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f6867800 session 0x5571f551b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f38d4000 session 0x5571f56f63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4fc4800 session 0x5571f4d910e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f572c800 session 0x5571f7f2ef00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4fc4c00 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.078160+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f38d4000 session 0x5571f3b074a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f4b90400 session 0x5571f4e70f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f7f3e000/0x0/0x4ffc00000, data 0x31cd775/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 21848064 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1881864 data_alloc: 234881024 data_used: 20799488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f7f3e000/0x0/0x4ffc00000, data 0x31cd775/0x3320000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.078392+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 21807104 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.078541+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 21807104 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 ms_handle_reset con 0x5571f572c800 session 0x5571f4e832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.078663+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.462837219s of 10.646491051s, submitted: 79
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 21823488 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.079069+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 21823488 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f4fc4800 session 0x5571f56f6780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.079238+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 21823488 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f4fc5000 session 0x5571f313c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f39c5400 session 0x5571f7f3b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1886309 data_alloc: 234881024 data_used: 20815872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc5000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.079368+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f4fc5000 session 0x5571f4eb7a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 30572544 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f7f3a000/0x0/0x4ffc00000, data 0x31cf24b/0x3323000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.079514+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 30572544 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f38d4000 session 0x5571f56f7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.079692+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 30572544 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4fc4800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.079806+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 30547968 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.079963+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 30547968 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f572c800 session 0x5571f5762960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1648361 data_alloc: 218103808 data_used: 9289728
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 ms_handle_reset con 0x5571f572d800 session 0x5571f3ddb860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.080175+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 30523392 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.080301+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f95ce000/0x0/0x4ffc00000, data 0x1b3d1e9/0x1c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 30523392 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f95ce000/0x0/0x4ffc00000, data 0x1b3d1e9/0x1c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.080421+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 120643584 unmapped: 30523392 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.110794067s of 10.300960541s, submitted: 46
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572d800 session 0x5571f4e71860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.080530+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f95c9000/0x0/0x4ffc00000, data 0x1b3ed05/0x1c94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 31432704 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.080621+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 31432704 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1654317 data_alloc: 218103808 data_used: 9297920
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.080758+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 31432704 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.080961+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 31432704 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.081122+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f95c9000/0x0/0x4ffc00000, data 0x1b3ed05/0x1c94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 31375360 heap: 151166976 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f38d4000 session 0x5571f57632c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f39c5400 session 0x5571f4d901e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f4b90000 session 0x5571f313c960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572c800 session 0x5571f313cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572c800 session 0x5571f4e830e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.081287+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e82960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 36896768 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f39c5400 session 0x5571f4e82780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.081412+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 36896768 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f4b90000 session 0x5571f31a05a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572d400 session 0x5571f31a1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1725268 data_alloc: 218103808 data_used: 9297920
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.081592+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 125624320 unmapped: 32899072 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.081722+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131440640 unmapped: 27082752 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.081871+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131440640 unmapped: 27082752 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f7ff3000/0x0/0x4ffc00000, data 0x3113d77/0x326b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.954306602s of 10.359686852s, submitted: 130
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572d400 session 0x5571f3317680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f38d4000 session 0x5571f4eb7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.082025+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 34570240 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f39c5400 session 0x5571f7f3a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.082221+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 34570240 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1766079 data_alloc: 234881024 data_used: 10362880
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.082343+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 34562048 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f4b90000 session 0x5571f551b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.082516+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 34562048 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.082634+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8844000/0x0/0x4ffc00000, data 0x28c4d05/0x2a1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 34562048 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.082777+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 34562048 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 ms_handle_reset con 0x5571f572d800 session 0x5571f4d910e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.082922+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 34545664 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1769182 data_alloc: 234881024 data_used: 10362880
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.083110+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f8840000/0x0/0x4ffc00000, data 0x28c7d15/0x2a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 34545664 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.083235+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 243 ms_handle_reset con 0x5571f572c800 session 0x5571f551a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 34545664 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.083404+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 34545664 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.083530+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.048246861s of 10.175660133s, submitted: 44
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 243 ms_handle_reset con 0x5571f38d4000 session 0x5571f32005a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 243 ms_handle_reset con 0x5571f4b90000 session 0x5571f3200960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 34537472 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.083678+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f883b000/0x0/0x4ffc00000, data 0x28cb8d9/0x2a22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 34578432 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775898 data_alloc: 234881024 data_used: 10375168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.083866+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 34578432 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.084018+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 244 ms_handle_reset con 0x5571f572d400 session 0x5571f3201e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 34570240 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 244 ms_handle_reset con 0x5571f39c5400 session 0x5571f5762960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.084246+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 244 ms_handle_reset con 0x5571f8b18000 session 0x5571f3201c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 245 ms_handle_reset con 0x5571f38d4000 session 0x5571f500f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 125042688 unmapped: 33480704 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.085025+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 125042688 unmapped: 33480704 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 245 ms_handle_reset con 0x5571f4b90000 session 0x5571f3b07860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.085122+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 245 ms_handle_reset con 0x5571f572c800 session 0x5571f3b063c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572d400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 ms_handle_reset con 0x5571f572d400 session 0x5571f3b065a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 33456128 heap: 158523392 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 ms_handle_reset con 0x5571f4b90000 session 0x5571f7f3be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 ms_handle_reset con 0x5571f38d4000 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1834305 data_alloc: 234881024 data_used: 11759616
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.085700+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f882f000/0x0/0x4ffc00000, data 0x28d1462/0x2a2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,7,3])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 151412736 unmapped: 13721600 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 ms_handle_reset con 0x5571f39c5400 session 0x5571f5ccaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.085835+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 ms_handle_reset con 0x5571f572c800 session 0x5571f394a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 25673728 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.088887+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 25673728 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.089016+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.747765541s of 10.219471931s, submitted: 155
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 25665536 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.089153+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f7856000/0x0/0x4ffc00000, data 0x38a9cbb/0x3a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 25665536 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f7856000/0x0/0x4ffc00000, data 0x38a9cbb/0x3a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1958123 data_alloc: 234881024 data_used: 23617536
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.089268+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 25665536 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 247 ms_handle_reset con 0x5571f8b18000 session 0x5571f5ccbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f7856000/0x0/0x4ffc00000, data 0x38a9cbb/0x3a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.089598+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139476992 unmapped: 25657344 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.090291+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136642560 unmapped: 28491776 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.090459+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f8b18000 session 0x5571f5cca1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.090592+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f39c5400 session 0x5571f5ccb0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f4b90000 session 0x5571f570e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f572c800 session 0x5571f570f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.090802+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1953200 data_alloc: 234881024 data_used: 23629824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.090987+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f7854000/0x0/0x4ffc00000, data 0x38ab865/0x3a09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 ms_handle_reset con 0x5571f38d4000 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.091135+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.091292+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.410539627s of 10.332668304s, submitted: 43
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.091479+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 ms_handle_reset con 0x5571f39c5400 session 0x5571f570f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f7850000/0x0/0x4ffc00000, data 0x38ad3b9/0x3a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.091770+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1957780 data_alloc: 234881024 data_used: 23629824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 28483584 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 ms_handle_reset con 0x5571f4b90000 session 0x5571f570eb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.092206+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 28475392 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.092524+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 28475392 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 ms_handle_reset con 0x5571f8b18400 session 0x5571f5a894a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 ms_handle_reset con 0x5571f8b18000 session 0x5571f570f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 250 ms_handle_reset con 0x5571f8b18000 session 0x5571f570fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.092653+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 250 ms_handle_reset con 0x5571f38d4000 session 0x5571f570e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 250 ms_handle_reset con 0x5571f39c5400 session 0x5571f570e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 250 ms_handle_reset con 0x5571f4b90000 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136708096 unmapped: 28426240 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.092886+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136757248 unmapped: 28377088 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.093177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1964224 data_alloc: 234881024 data_used: 23629824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f7849000/0x0/0x4ffc00000, data 0x38b0b61/0x3a13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 252 ms_handle_reset con 0x5571f8b18400 session 0x5571f3b06780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 252 ms_handle_reset con 0x5571f8b18800 session 0x5571f4d92d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 137068544 unmapped: 28065792 heap: 165134336 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.093366+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141279232 unmapped: 36462592 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.093618+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 32210944 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.093747+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142786560 unmapped: 34955264 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.762221336s of 10.112544060s, submitted: 28
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.093878+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138633216 unmapped: 39108608 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.093996+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2406358 data_alloc: 251658240 data_used: 28942336
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142835712 unmapped: 34906112 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f3422000/0x0/0x4ffc00000, data 0x7cd6799/0x7e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.096294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 147095552 unmapped: 30646272 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.096457+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138764288 unmapped: 38977536 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.096588+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 heartbeat osd_stat(store_statfs(0x4ef023000/0x0/0x4ffc00000, data 0xc0d6799/0xc23b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142991360 unmapped: 34750464 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.097072+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138829824 unmapped: 38912000 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.097288+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3478228 data_alloc: 251658240 data_used: 28950528
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 151470080 unmapped: 26271744 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.097420+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 heartbeat osd_stat(store_statfs(0x4eac20000/0x0/0x4ffc00000, data 0x104d8253/0x1063e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 151543808 unmapped: 26198016 heap: 177741824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.097553+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 42721280 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.097699+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 heartbeat osd_stat(store_statfs(0x4e6820000/0x0/0x4ffc00000, data 0x148d8253/0x14a3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 42483712 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f8b18000 session 0x5571f3b072c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f4b90000 session 0x5571f7f3a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.097824+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f8b18c00 session 0x5571f4b6d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.230173111s of 10.660673141s, submitted: 77
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f8b19000 session 0x5571f2d003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139460608 unmapped: 42483712 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.098040+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4121028 data_alloc: 251658240 data_used: 28975104
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 heartbeat osd_stat(store_statfs(0x4e7e96000/0x0/0x4ffc00000, data 0x12663243/0x127c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,3])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145326080 unmapped: 36618240 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.149509+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f4b90000 session 0x5571f4eb72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 ms_handle_reset con 0x5571f8b18000 session 0x5571f7f2f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145539072 unmapped: 36405248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.149627+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146595840 unmapped: 35348480 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.149817+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f7292000/0x0/0x4ffc00000, data 0x3e64e6b/0x3fcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146628608 unmapped: 35315712 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.150038+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146628608 unmapped: 35315712 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 ms_handle_reset con 0x5571f8b18800 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.150356+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2174086 data_alloc: 251658240 data_used: 32079872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 ms_handle_reset con 0x5571f8b18c00 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146964480 unmapped: 34979840 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.150560+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f7292000/0x0/0x4ffc00000, data 0x3e64e6b/0x3fcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 ms_handle_reset con 0x5571f8b19000 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 37994496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.150747+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f7291000/0x0/0x4ffc00000, data 0x3e64e7b/0x3fcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 37994496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.151015+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 37994496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.151152+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 37978112 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.151427+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2168042 data_alloc: 251658240 data_used: 32808960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 ms_handle_reset con 0x5571f4b90000 session 0x5571f4eb70e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 37978112 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.151793+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.239187241s of 11.747309685s, submitted: 96
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f7292000/0x0/0x4ffc00000, data 0x3e64e6b/0x3fcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 ms_handle_reset con 0x5571f8b18000 session 0x5571f32003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 37978112 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.151984+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f728e000/0x0/0x4ffc00000, data 0x3e66935/0x3fcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,5])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 151945216 unmapped: 29999104 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.152180+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18800 session 0x5571f551be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18c00 session 0x5571f570f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144105472 unmapped: 37838848 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.152335+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144105472 unmapped: 37838848 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.152494+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2231857 data_alloc: 251658240 data_used: 32817152
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144105472 unmapped: 37838848 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.152650+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b19400 session 0x5571f56f65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f4b90000 session 0x5571f4d91a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18000 session 0x5571f4d90d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18800 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144179200 unmapped: 37765120 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.152784+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144187392 unmapped: 37756928 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.152954+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f6b07000/0x0/0x4ffc00000, data 0x45ee935/0x4757000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145973248 unmapped: 35971072 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.153209+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144408576 unmapped: 37535744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.153362+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2253566 data_alloc: 251658240 data_used: 32817152
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18c00 session 0x5571f571b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572b800 session 0x5571f4b6d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f4b90000 session 0x5571f4b6cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144670720 unmapped: 37273600 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.153555+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.625908375s of 10.212938309s, submitted: 57
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f6168000/0x0/0x4ffc00000, data 0x4f8d935/0x50f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,5])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 150839296 unmapped: 31105024 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.153950+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a400 session 0x5571f4e71680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18000 session 0x5571f313be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a800 session 0x5571f5cca000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572b800 session 0x5571f4b6dc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18800 session 0x5571f313a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f4b90000 session 0x5571f313d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f6168000/0x0/0x4ffc00000, data 0x4f8d935/0x50f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144048128 unmapped: 37896192 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.154172+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a800 session 0x5571f55b0b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a400 session 0x5571f313ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572b800 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f4b90000 session 0x5571f4da8960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144048128 unmapped: 37896192 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.154375+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a400 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144064512 unmapped: 37879808 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.154544+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2311968 data_alloc: 251658240 data_used: 32817152
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144203776 unmapped: 37740544 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.154702+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f572a800 session 0x5571f551a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18800 session 0x5571f313b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144203776 unmapped: 37740544 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.155080+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18000 session 0x5571f4ea7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144228352 unmapped: 37715968 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.155268+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f6166000/0x0/0x4ffc00000, data 0x4f8d968/0x50f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144416768 unmapped: 37527552 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.155421+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 34947072 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.155572+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f38d4000 session 0x5571f3b070e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f39c5400 session 0x5571f56f7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2379528 data_alloc: 251658240 data_used: 42164224
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 32530432 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.155748+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.836299896s of 10.212245941s, submitted: 21
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 32497664 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.155925+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 32497664 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.156059+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f66f1000/0x0/0x4ffc00000, data 0x4a02968/0x4b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.156203+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.156429+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2334609 data_alloc: 251658240 data_used: 42061824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.156684+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.156835+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.156967+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.157134+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 32464896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f66f1000/0x0/0x4ffc00000, data 0x4a02968/0x4b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.157433+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 32440320 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2336621 data_alloc: 251658240 data_used: 42110976
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.157592+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 32210944 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.887722969s of 10.083938599s, submitted: 5
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.157808+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 32210944 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f66df000/0x0/0x4ffc00000, data 0x4a14968/0x4b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.157973+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 32210944 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.158218+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 32210944 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f66df000/0x0/0x4ffc00000, data 0x4a14968/0x4b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.158423+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 31784960 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2341497 data_alloc: 251658240 data_used: 42127360
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.158656+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 150159360 unmapped: 31784960 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.158835+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 150609920 unmapped: 31334400 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f65d6000/0x0/0x4ffc00000, data 0x4b1d968/0x4c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,3])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.159016+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 151838720 unmapped: 30105600 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.159187+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 158138368 unmapped: 23805952 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.159473+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 23117824 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2404743 data_alloc: 251658240 data_used: 42016768
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f5ee7000/0x0/0x4ffc00000, data 0x520c968/0x5377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3,18])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.159635+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 23085056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.159819+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 23085056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.160029+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 23085056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.150408506s of 12.364668846s, submitted: 32
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.832301140s, txc = 0x5571f323c600
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831918716s, txc = 0x5571f3e68000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831801891s, txc = 0x5571f32bbb00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831669331s, txc = 0x5571f3e24900
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831518650s, txc = 0x5571f3a83500
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831317902s, txc = 0x5571f2f5c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.831207275s, txc = 0x5571f323c900
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830897808s, txc = 0x5571f3dd2300
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830754280s, txc = 0x5571f3e71800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.830445766s, txc = 0x5571f38f4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.829812527s, txc = 0x5571f32ba300
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.648219585s, txc = 0x5571f2ccdb00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.648050308s, txc = 0x5571f3e9e600
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.647362709s, txc = 0x5571f3e70300
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.646933079s, txc = 0x5571f2ca5500
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.646096706s, txc = 0x5571f3b26600
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.645798206s, txc = 0x5571f3e48f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.645304680s, txc = 0x5571f3f61b00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.160226+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 156467200 unmapped: 25477120 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.160417+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f8b18800 session 0x5571f4ea61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 156753920 unmapped: 25190400 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2408755 data_alloc: 251658240 data_used: 42065920
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.160550+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157040640 unmapped: 24903680 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f5c45000/0x0/0x4ffc00000, data 0x54ae968/0x5619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.160690+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 24805376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f5c3b000/0x0/0x4ffc00000, data 0x54b7968/0x5622000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.160852+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157163520 unmapped: 24780800 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.161042+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157163520 unmapped: 24780800 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.161145+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157376512 unmapped: 24567808 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2418083 data_alloc: 251658240 data_used: 42508288
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.161246+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157376512 unmapped: 24567808 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 ms_handle_reset con 0x5571f2c24800 session 0x5571f4d8cd20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f5bc3000/0x0/0x4ffc00000, data 0x5530968/0x569b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.161419+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157671424 unmapped: 24272896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.161592+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 24240128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.161752+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.760034084s of 10.018532753s, submitted: 71
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 153026560 unmapped: 28917760 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 256 ms_handle_reset con 0x5571f2c24c00 session 0x5571f4d92960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 256 ms_handle_reset con 0x5571f2c24c00 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.161914+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 153026560 unmapped: 28917760 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2254708 data_alloc: 251658240 data_used: 32669696
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.162045+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152354816 unmapped: 29589504 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f39c5400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 ms_handle_reset con 0x5571f39c5400 session 0x5571f500fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 ms_handle_reset con 0x5571f4b90400 session 0x5571f56f7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 ms_handle_reset con 0x5571f4fc4800 session 0x5571f7f3ba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.162160+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 29556736 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f6b85000/0x0/0x4ffc00000, data 0x456f100/0x46d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 ms_handle_reset con 0x5571f8b18800 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.162283+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142098432 unmapped: 39845888 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 ms_handle_reset con 0x5571f2c25800 session 0x5571f4eb6d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.162496+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142434304 unmapped: 39510016 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 ms_handle_reset con 0x5571f38d4000 session 0x5571f2efe780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.162609+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142442496 unmapped: 39501824 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2107903 data_alloc: 234881024 data_used: 17346560
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f7a3b000/0x0/0x4ffc00000, data 0x36b597a/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.162764+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142442496 unmapped: 39501824 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.162937+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 ms_handle_reset con 0x5571f7284400 session 0x5571f3ddaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142450688 unmapped: 39493632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 259 ms_handle_reset con 0x5571f2c24000 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 260 ms_handle_reset con 0x5571f2c24c00 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 260 ms_handle_reset con 0x5571f2c24000 session 0x5571f3b2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.163123+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 40312832 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 260 ms_handle_reset con 0x5571f2c25800 session 0x5571f7f3ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.163448+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 40280064 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.746595383s of 10.232762337s, submitted: 97
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 261 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e830e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7284400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 262 ms_handle_reset con 0x5571f7284400 session 0x5571f65821e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.163630+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 41910272 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 263 ms_handle_reset con 0x5571f2c25400 session 0x5571f4ea61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2079939 data_alloc: 234881024 data_used: 16650240
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.163799+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f6c31000/0x0/0x4ffc00000, data 0x331a910/0x348c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 41902080 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 263 ms_handle_reset con 0x5571f2c25400 session 0x5571f2eff4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 263 ms_handle_reset con 0x5571f2c25800 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.163967+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140042240 unmapped: 41902080 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f2c24000 session 0x5571f4e70f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.164282+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 41893888 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f6c6e000/0x0/0x4ffc00000, data 0x32dc4d6/0x344e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.164439+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 41893888 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f2c24800 session 0x5571f551ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f8b18c00 session 0x5571f3ddbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f7285000 session 0x5571f5cca1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.164612+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 50126848 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f2c24000 session 0x5571f571a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1894073 data_alloc: 218103808 data_used: 7835648
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f79a4000/0x0/0x4ffc00000, data 0x21984d6/0x230a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.164797+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 50864128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.164997+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 50864128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.165158+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 50864128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f79a4000/0x0/0x4ffc00000, data 0x21984b3/0x2309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.165326+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 131080192 unmapped: 50864128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.614771843s of 10.092021942s, submitted: 111
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 ms_handle_reset con 0x5571f2c24800 session 0x5571f571a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 265 ms_handle_reset con 0x5571f2c25400 session 0x5571f55b0000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.165546+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 49790976 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1842551 data_alloc: 218103808 data_used: 7839744
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.165699+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 49790976 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.165877+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f812a000/0x0/0x4ffc00000, data 0x1a11f5d/0x1b83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 49782784 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 266 ms_handle_reset con 0x5571f2c25800 session 0x5571f3f201e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.166043+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 49831936 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.166176+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.166402+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1852305 data_alloc: 218103808 data_used: 7843840
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.166562+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.166822+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 268 heartbeat osd_stat(store_statfs(0x4f8120000/0x0/0x4ffc00000, data 0x1a17349/0x1b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.166999+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.167269+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.167544+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1852305 data_alloc: 218103808 data_used: 7843840
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.167728+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.167921+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 49823744 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.990506172s of 13.163058281s, submitted: 53
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f8120000/0x0/0x4ffc00000, data 0x1a17349/0x1b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c24000 session 0x5571f4e4ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c24800 session 0x5571f4e4bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c25400 session 0x5571f4e4a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f7285000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f7285000 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f38d4000 session 0x5571f3a1e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.168080+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 49627136 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.168272+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 49627136 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f7dfd000/0x0/0x4ffc00000, data 0x1d39e03/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.168534+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 49627136 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883341 data_alloc: 218103808 data_used: 7843840
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.168719+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132317184 unmapped: 49627136 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c24000 session 0x5571f3a1f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.168882+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 49258496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.169039+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 49258496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.169228+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 49258496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.169542+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f7dd4000/0x0/0x4ffc00000, data 0x1d63e03/0x1eda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132685824 unmapped: 49258496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1901604 data_alloc: 234881024 data_used: 9990144
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.169776+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132513792 unmapped: 49430528 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c25000 session 0x5571f5ccb4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.169994+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 49889280 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f4aa9400 session 0x5571f5cca5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.044325829s of 10.184203148s, submitted: 33
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f38d4000 session 0x5571f5ccbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.170208+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f4aa9000 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.170492+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f7d93000/0x0/0x4ffc00000, data 0x1da3e66/0x1f1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.170671+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1910210 data_alloc: 234881024 data_used: 10252288
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f4aa9000 session 0x5571f500fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.170814+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c24000 session 0x5571f3b061e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 45596672 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f2c25000 session 0x5571f7f3b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.171007+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.171187+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 ms_handle_reset con 0x5571f38d4000 session 0x5571f31efc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.171370+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 49922048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f7418000/0x0/0x4ffc00000, data 0x271ee66/0x2896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.171562+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 270 ms_handle_reset con 0x5571f6867400 session 0x5571f571b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 51208192 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2055357 data_alloc: 234881024 data_used: 10276864
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f6867000 session 0x5571f500fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f2c24000 session 0x5571f4e70d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f4aa9400 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.171693+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f2c25000 session 0x5571f3843860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134381568 unmapped: 47562752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f38d4000 session 0x5571f4d8dc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.171859+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134438912 unmapped: 47505408 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f38d4000 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.349054337s of 10.112157822s, submitted: 93
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.172177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f2c25000 session 0x5571f4b6d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f2c24000 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 45416448 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f6eac800 session 0x5571f570fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f6867000 session 0x5571f313cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 ms_handle_reset con 0x5571f6867000 session 0x5571f4e832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.172350+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133341184 unmapped: 48603136 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6ead800 session 0x5571f571a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eadc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eadc00 session 0x5571f2eff4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f38d4000 session 0x5571f4e830e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f4aa9400 session 0x5571f7f3be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.172504+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 48586752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f61df000/0x0/0x4ffc00000, data 0x39508eb/0x3ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2185662 data_alloc: 234881024 data_used: 11157504
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.172798+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 48586752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.172990+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 48586752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.173267+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 48586752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eac800 session 0x5571f5ccba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eac000 session 0x5571f4bef680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.175650+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 133234688 unmapped: 48709632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f38d4000 session 0x5571f313a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.176560+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134283264 unmapped: 47661056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2181478 data_alloc: 234881024 data_used: 11415552
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f61e0000/0x0/0x4ffc00000, data 0x39508eb/0x3ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.176721+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134283264 unmapped: 47661056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f61e0000/0x0/0x4ffc00000, data 0x39508eb/0x3ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.176876+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f4aa9400 session 0x5571f2efe960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f61e0000/0x0/0x4ffc00000, data 0x39508eb/0x3ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134283264 unmapped: 47661056 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6867000 session 0x5571f2d01a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.177021+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 47669248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.177257+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 47669248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eadc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eadc00 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.177434+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f7240000/0x0/0x4ffc00000, data 0x3910888/0x3a8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4015 syncs, 3.36 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6347 writes, 26K keys, 6347 commit groups, 1.0 writes per commit group, ingest: 19.07 MB, 0.03 MB/s
                                           Interval WAL: 6347 writes, 2569 syncs, 2.47 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 47669248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6ead800 session 0x5571f4e4af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eac800 session 0x5571f2d01c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2176824 data_alloc: 234881024 data_used: 11149312
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f2739c00 session 0x5571f4d92b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.177584+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d4000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f38d4000 session 0x5571f4d901e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 47538176 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.288263321s of 13.705665588s, submitted: 94
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f2739c00 session 0x5571f3b06d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.177748+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f6eac800 session 0x5571f4d8d2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134438912 unmapped: 47505408 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eadc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f723f000/0x0/0x4ffc00000, data 0x39108fa/0x3a8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.177912+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 47136768 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.178046+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 45547520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f4aa9000 session 0x5571f3b2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.178157+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f4aa9400 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 45539328 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2238144 data_alloc: 234881024 data_used: 16547840
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f4b90400 session 0x5571f7f3af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.178292+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 ms_handle_reset con 0x5571f2739c00 session 0x5571f3a1f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 46211072 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.178417+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135733248 unmapped: 46211072 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 273 ms_handle_reset con 0x5571f4aa9000 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.178541+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 46219264 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f6e90000/0x0/0x4ffc00000, data 0x3cbc530/0x3e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.178762+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 46219264 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.178916+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135725056 unmapped: 46219264 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2268798 data_alloc: 234881024 data_used: 16564224
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f6e90000/0x0/0x4ffc00000, data 0x3cbc530/0x3e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.179140+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f6e90000/0x0/0x4ffc00000, data 0x3cbc530/0x3e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 274 ms_handle_reset con 0x5571f4aa9400 session 0x5571f31a0d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135757824 unmapped: 46186496 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.179340+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.976321220s of 10.798518181s, submitted: 76
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 275 ms_handle_reset con 0x5571f6eac800 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 275 ms_handle_reset con 0x5571f4b90400 session 0x5571f3b2f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 46178304 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.179962+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 275 ms_handle_reset con 0x5571f2739c00 session 0x5571f4da8780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 46178304 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.180156+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f6e88000/0x0/0x4ffc00000, data 0x3cbfd4a/0x3e45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135766016 unmapped: 46178304 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 276 ms_handle_reset con 0x5571f4aa9000 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.180422+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 276 ms_handle_reset con 0x5571f4aa9400 session 0x5571f5a892c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 41623552 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2369148 data_alloc: 234881024 data_used: 16875520
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 277 ms_handle_reset con 0x5571f6eac800 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.180597+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f4b90000 session 0x5571f55b05a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140263424 unmapped: 41680896 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.181558+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 41189376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.181732+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f2c24800 session 0x5571f4da8960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f2c25400 session 0x5571f4d92000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f660d000/0x0/0x4ffc00000, data 0x479b180/0x46bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 41189376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f2739c00 session 0x5571f4e4bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.181910+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f4aa9000 session 0x5571f5762f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138739712 unmapped: 43204608 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.182075+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138747904 unmapped: 43196416 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2292963 data_alloc: 234881024 data_used: 13844480
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.182294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138756096 unmapped: 43188224 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.182479+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.064669609s of 10.114242554s, submitted: 178
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139534336 unmapped: 42409984 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f4aa9400 session 0x5571f4d92d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f4b90000 session 0x5571f7f2f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.182655+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f6ead800 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f6eadc00 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139452416 unmapped: 42491904 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.182818+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f6dee000/0x0/0x4ffc00000, data 0x3fbb180/0x3edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.183035+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2315047 data_alloc: 234881024 data_used: 17235968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.183168+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 278 ms_handle_reset con 0x5571f2739c00 session 0x5571f4da94a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.183307+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f6dec000/0x0/0x4ffc00000, data 0x3fbcc4a/0x3ee1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.183471+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.183671+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.183802+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f6dec000/0x0/0x4ffc00000, data 0x3fbcc4a/0x3ee1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c25400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2317616 data_alloc: 234881024 data_used: 17244160
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.183955+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.184125+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.184349+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f6ded000/0x0/0x4ffc00000, data 0x3fbcc4a/0x3ee1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 ms_handle_reset con 0x5571f2c25400 session 0x5571f65825a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139468800 unmapped: 42475520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.255764008s of 11.247926712s, submitted: 41
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.184500+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 ms_handle_reset con 0x5571f2c24800 session 0x5571f3200960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139476992 unmapped: 42467328 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 heartbeat osd_stat(store_statfs(0x4f6de9000/0x0/0x4ffc00000, data 0x3fbe81e/0x3ee4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 ms_handle_reset con 0x5571f2739c00 session 0x5571f4e31a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.184683+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 ms_handle_reset con 0x5571f4b90000 session 0x5571f55b1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139476992 unmapped: 42467328 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 ms_handle_reset con 0x5571f6ead800 session 0x5571f3317c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2320839 data_alloc: 234881024 data_used: 17248256
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eadc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 ms_handle_reset con 0x5571f6eadc00 session 0x5571f5a88f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.184819+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 42442752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 281 ms_handle_reset con 0x5571f2739c00 session 0x5571f5a89a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 281 ms_handle_reset con 0x5571f4b90000 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 281 ms_handle_reset con 0x5571f4aa9000 session 0x5571f3b06d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.184973+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 281 ms_handle_reset con 0x5571f572b400 session 0x5571f4e830e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f6eac800 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f6ead800 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 138043392 unmapped: 43900928 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa9000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f2c24800 session 0x5571f65832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f572b400 session 0x5571f65834a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f4b90000 session 0x5571f56f7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.185140+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f4aa9000 session 0x5571f313cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 ms_handle_reset con 0x5571f2739c00 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f6de8000/0x0/0x4ffc00000, data 0x3fc0372/0x3ee4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 46571520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.185313+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 46571520 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.185475+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 283 ms_handle_reset con 0x5571f2c24800 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 46415872 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 283 ms_handle_reset con 0x5571f4b90000 session 0x5571f571bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1959121 data_alloc: 218103808 data_used: 8171520
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.185641+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 46415872 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 283 ms_handle_reset con 0x5571f572b400 session 0x5571f571ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.185772+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f9114000/0x0/0x4ffc00000, data 0x1a3168b/0x1bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 46415872 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.185905+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 46415872 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6ead800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.571042061s of 10.033976555s, submitted: 168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f6ead800 session 0x5571f56f7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.186077+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f2739c00 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f2c24800 session 0x5571f4e830e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 46415872 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.186305+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f572b400 session 0x5571f56f7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f4b90000 session 0x5571f3b06d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f572a800 session 0x5571f56f61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 ms_handle_reset con 0x5571f6eac800 session 0x5571f55b1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 45293568 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1978210 data_alloc: 218103808 data_used: 8183808
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.186453+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 285 ms_handle_reset con 0x5571f2739c00 session 0x5571f4da8960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136650752 unmapped: 45293568 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 285 heartbeat osd_stat(store_statfs(0x4f908a000/0x0/0x4ffc00000, data 0x1ab6e29/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.186612+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.186774+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f9086000/0x0/0x4ffc00000, data 0x1ab8a51/0x1c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.187049+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f9086000/0x0/0x4ffc00000, data 0x1ab8a51/0x1c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.187211+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981856 data_alloc: 218103808 data_used: 8183808
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.187364+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 286 ms_handle_reset con 0x5571f2c24800 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.187578+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 45285376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b90000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 287 ms_handle_reset con 0x5571f572b400 session 0x5571f56f65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.187746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136667136 unmapped: 45277184 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.695511818s of 10.010815620s, submitted: 88
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 288 ms_handle_reset con 0x5571f4b90000 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 288 ms_handle_reset con 0x5571f2739c00 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.187929+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136667136 unmapped: 45277184 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.188130+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f907e000/0x0/0x4ffc00000, data 0x1abc795/0x1c4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136675328 unmapped: 45268992 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f2c24800 session 0x5571f7f2f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f572b400 session 0x5571f4d921e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1993354 data_alloc: 218103808 data_used: 8196096
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.188269+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f6eac800 session 0x5571f4e82780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136683520 unmapped: 45260800 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f572a400 session 0x5571f4eb63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f8b19c00 session 0x5571f4da9c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.188409+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f2739c00 session 0x5571f56f6f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f2c24800 session 0x5571f3b061e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136454144 unmapped: 45490176 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f8b19800 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.188653+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f8b18000 session 0x5571f4b6d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2739c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f9055000/0x0/0x4ffc00000, data 0x1ae7e91/0x1c79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 ms_handle_reset con 0x5571f2c24800 session 0x5571f4da85a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 45481984 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.188886+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 45481984 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 290 ms_handle_reset con 0x5571f8b19800 session 0x5571f4eb7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.189122+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 290 heartbeat osd_stat(store_statfs(0x4f9054000/0x0/0x4ffc00000, data 0x1ae7ea1/0x1c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 290 ms_handle_reset con 0x5571f8b19c00 session 0x5571f2efe780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 291 ms_handle_reset con 0x5571f572c400 session 0x5571f6582000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 291 ms_handle_reset con 0x5571f2739c00 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 45465600 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 291 ms_handle_reset con 0x5571f2c24800 session 0x5571f6582f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2014716 data_alloc: 218103808 data_used: 8749056
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.189326+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 291 ms_handle_reset con 0x5571f572c400 session 0x5571f7f3a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 45465600 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.189486+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f8b19800 session 0x5571f313d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f8b19c00 session 0x5571f4bee960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f572c000 session 0x5571f4e31c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f572c000 session 0x5571f551b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 45441024 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f2c24800 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.189644+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 heartbeat osd_stat(store_statfs(0x4f9049000/0x0/0x4ffc00000, data 0x1aed2b5/0x1c82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 45441024 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.189808+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 heartbeat osd_stat(store_statfs(0x4f9049000/0x0/0x4ffc00000, data 0x1aed2b5/0x1c82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 45441024 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.813530922s of 11.154751778s, submitted: 101
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 ms_handle_reset con 0x5571f572c400 session 0x5571f6582000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.189977+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 45441024 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 293 ms_handle_reset con 0x5571f8b19800 session 0x5571f4eb7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2016692 data_alloc: 218103808 data_used: 8749056
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.190138+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 294 ms_handle_reset con 0x5571f8b19c00 session 0x5571f55b1860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 45432832 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.190454+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 45432832 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.190600+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f9044000/0x0/0x4ffc00000, data 0x1af0af7/0x1c89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 294 ms_handle_reset con 0x5571f2c24800 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 45408256 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.190758+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 295 ms_handle_reset con 0x5571f572c000 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 136544256 unmapped: 45400064 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.190944+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 295 heartbeat osd_stat(store_statfs(0x4f9042000/0x0/0x4ffc00000, data 0x1af0b68/0x1c8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,3])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 296 ms_handle_reset con 0x5571f572c400 session 0x5571f2efe960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 296 ms_handle_reset con 0x5571f8b19800 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f8b19c00 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 44146688 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2104239 data_alloc: 218103808 data_used: 9158656
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f8b19c00 session 0x5571f500f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.191133+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f2c24800 session 0x5571f3b07c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f572c000 session 0x5571f2d01a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f572c400 session 0x5571f7f3af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f88e6000/0x0/0x4ffc00000, data 0x22413e5/0x23e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 41746432 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f8b19800 session 0x5571f56f6b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.191344+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f572c000 session 0x5571f571a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 ms_handle_reset con 0x5571f2c24800 session 0x5571f570e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 298 ms_handle_reset con 0x5571f6867400 session 0x5571f3ddab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139960320 unmapped: 41984000 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.191551+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 298 ms_handle_reset con 0x5571f6867800 session 0x5571f4eb63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f8b19c00 session 0x5571f7f2e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f6866800 session 0x5571f56f65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f572c400 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 42565632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.191691+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f771c000/0x0/0x4ffc00000, data 0x2265cc3/0x2410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f2c24800 session 0x5571f5a881e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f572c000 session 0x5571f3dda780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 42565632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.191875+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 42565632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2120317 data_alloc: 218103808 data_used: 9170944
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.192030+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f6867400 session 0x5571f4b6c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.968988419s of 11.444383621s, submitted: 133
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f6867400 session 0x5571f394b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f2c24800 session 0x5571f4da9680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139378688 unmapped: 42565632 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.192241+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f572c000 session 0x5571f55b1e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 ms_handle_reset con 0x5571f572c400 session 0x5571f7f2fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 300 ms_handle_reset con 0x5571f8b19c00 session 0x5571f4e714a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139395072 unmapped: 42549248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.192613+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f7721000/0x0/0x4ffc00000, data 0x2268bc1/0x240d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 301 ms_handle_reset con 0x5571f6867800 session 0x5571f4b6cb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 301 ms_handle_reset con 0x5571f6866800 session 0x5571f7f2fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139395072 unmapped: 42549248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.192860+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139395072 unmapped: 42549248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.193063+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f7719000/0x0/0x4ffc00000, data 0x226c369/0x2413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 301 handle_osd_map epochs [302,302], i have 302, src has [1,302]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 302 ms_handle_reset con 0x5571f2c24800 session 0x5571f55b1e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139395072 unmapped: 42549248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2129667 data_alloc: 218103808 data_used: 9183232
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.193171+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139395072 unmapped: 42549248 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.193383+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x226df91/0x2416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 303 ms_handle_reset con 0x5571f572c000 session 0x5571f3dda780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x226df2f/0x2415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 303 ms_handle_reset con 0x5571f572c400 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139419648 unmapped: 42524672 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.193609+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 304 ms_handle_reset con 0x5571f2c24800 session 0x5571f56f65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 304 ms_handle_reset con 0x5571f572c000 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 42442752 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.193760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 304 ms_handle_reset con 0x5571f6866800 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 42295296 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.193949+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 305 ms_handle_reset con 0x5571f6867800 session 0x5571f7f3a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 42295296 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.194227+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2136705 data_alloc: 218103808 data_used: 9187328
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 306 ms_handle_reset con 0x5571f6867400 session 0x5571f6582d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 306 ms_handle_reset con 0x5571f2c24800 session 0x5571f4d90d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 42295296 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.194375+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487491608s of 11.196624756s, submitted: 241
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 307 ms_handle_reset con 0x5571f572c000 session 0x5571f3b074a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f72fe000/0x0/0x4ffc00000, data 0x2274a1b/0x241e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 42295296 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.194510+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 307 ms_handle_reset con 0x5571f6867400 session 0x5571f571a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 307 ms_handle_reset con 0x5571f6867800 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 41246720 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.194655+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573bc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 308 ms_handle_reset con 0x5571f573bc00 session 0x5571f3f201e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 309 ms_handle_reset con 0x5571f2c24800 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.195232+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140713984 unmapped: 41230336 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 310 ms_handle_reset con 0x5571f6866800 session 0x5571f5ccb4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 310 ms_handle_reset con 0x5571f572c000 session 0x5571f31a01e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.195693+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140722176 unmapped: 41222144 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2149002 data_alloc: 218103808 data_used: 9195520
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f72f6000/0x0/0x4ffc00000, data 0x227baa7/0x2427000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573bc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.195988+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140722176 unmapped: 41222144 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 312 ms_handle_reset con 0x5571f573bc00 session 0x5571f5a88b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 312 ms_handle_reset con 0x5571f6867400 session 0x5571f31a1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 312 ms_handle_reset con 0x5571f2c24800 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.196151+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 41205760 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 312 heartbeat osd_stat(store_statfs(0x4f72f0000/0x0/0x4ffc00000, data 0x227f1c3/0x242e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 313 ms_handle_reset con 0x5571f572c000 session 0x5571f7f3a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.196445+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 41205760 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f72eb000/0x0/0x4ffc00000, data 0x2280dd1/0x2432000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573bc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 314 ms_handle_reset con 0x5571f573bc00 session 0x5571f551a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.196802+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 41189376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.197014+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 41189376 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2166065 data_alloc: 218103808 data_used: 8687616
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f72e7000/0x0/0x4ffc00000, data 0x22829f9/0x2435000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 314 handle_osd_map epochs [315,315], i have 315, src has [1,315]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 315 ms_handle_reset con 0x5571f6867800 session 0x5571f5ccba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.197560+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 41172992 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600614548s of 10.128721237s, submitted: 161
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 316 ms_handle_reset con 0x5571f573b400 session 0x5571f4d8d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.197760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140787712 unmapped: 41156608 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 316 ms_handle_reset con 0x5571f573b800 session 0x5571f4e301e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 317 ms_handle_reset con 0x5571f6866800 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.198342+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 41140224 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.198597+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 41140224 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 317 ms_handle_reset con 0x5571f572c000 session 0x5571f56f72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573bc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 318 ms_handle_reset con 0x5571f573b400 session 0x5571f570e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.198982+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 41115648 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2182949 data_alloc: 218103808 data_used: 8699904
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 319 ms_handle_reset con 0x5571f573bc00 session 0x5571f4d91a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 319 ms_handle_reset con 0x5571f2c24800 session 0x5571f31a1e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 319 ms_handle_reset con 0x5571f572c000 session 0x5571f5ccb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f72d7000/0x0/0x4ffc00000, data 0x228b50d/0x2445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.199191+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 41099264 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 320 ms_handle_reset con 0x5571f6867800 session 0x5571f4d8c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 320 ms_handle_reset con 0x5571f573b800 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.199354+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 41091072 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 ms_handle_reset con 0x5571f573b400 session 0x5571f4b6c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.199573+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140869632 unmapped: 41074688 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 ms_handle_reset con 0x5571f572c000 session 0x5571f56f7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 ms_handle_reset con 0x5571f2c24800 session 0x5571f394a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 ms_handle_reset con 0x5571f6866800 session 0x5571f33163c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.199766+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 ms_handle_reset con 0x5571f6867800 session 0x5571f4e4a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140886016 unmapped: 41058304 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f573a800 session 0x5571f55b1a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f2c24800 session 0x5571f570e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.200051+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f572c000 session 0x5571f56f72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 41025536 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2218452 data_alloc: 218103808 data_used: 8716288
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f6866800 session 0x5571f4e301e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f6867800 session 0x5571f5ccba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 ms_handle_reset con 0x5571f4f06c00 session 0x5571f4e4bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 323 ms_handle_reset con 0x5571f573b800 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.200347+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f72c8000/0x0/0x4ffc00000, data 0x24bd5ad/0x2454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 140918784 unmapped: 41025536 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 323 ms_handle_reset con 0x5571f2c24800 session 0x5571f7f3a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 323 ms_handle_reset con 0x5571f4f06c00 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.779619217s of 10.103399277s, submitted: 85
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 324 ms_handle_reset con 0x5571f572c000 session 0x5571f4da8b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.200629+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 40837120 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 325 ms_handle_reset con 0x5571f6867800 session 0x5571f7f2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.200763+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 42795008 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.200918+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 42762240 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f7299000/0x0/0x4ffc00000, data 0x24eae06/0x2484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 325 handle_osd_map epochs [326,326], i have 326, src has [1,326]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.201048+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139190272 unmapped: 42754048 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 326 ms_handle_reset con 0x5571f2c24800 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2231986 data_alloc: 218103808 data_used: 8720384
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 ms_handle_reset con 0x5571f4f06c00 session 0x5571f2eff0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.201215+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139198464 unmapped: 42745856 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 ms_handle_reset con 0x5571f572c000 session 0x5571f4d903c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f7294000/0x0/0x4ffc00000, data 0x24ee522/0x2489000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.201378+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139198464 unmapped: 42745856 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.201641+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139198464 unmapped: 42745856 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.201857+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139198464 unmapped: 42745856 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 ms_handle_reset con 0x5571f4f07400 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.202008+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139206656 unmapped: 42737664 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2239331 data_alloc: 218103808 data_used: 8732672
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 327 handle_osd_map epochs [328,328], i have 328, src has [1,328]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.202240+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139223040 unmapped: 42721280 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 328 ms_handle_reset con 0x5571f4f07c00 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.202502+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f7290000/0x0/0x4ffc00000, data 0x24f0246/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139231232 unmapped: 42713088 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.844756126s of 10.538313866s, submitted: 113
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 329 ms_handle_reset con 0x5571f4f07800 session 0x5571f56f74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 329 ms_handle_reset con 0x5571f4f07c00 session 0x5571f4e714a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 329 ms_handle_reset con 0x5571f573b800 session 0x5571f571bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.202669+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139247616 unmapped: 42696704 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.202874+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139247616 unmapped: 42696704 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.203048+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139272192 unmapped: 42672128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253260 data_alloc: 218103808 data_used: 8773632
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.203138+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139272192 unmapped: 42672128 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 330 heartbeat osd_stat(store_statfs(0x4f7288000/0x0/0x4ffc00000, data 0x24f3f62/0x2495000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.203486+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 42328064 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 330 ms_handle_reset con 0x5571f2c24800 session 0x5571f551a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 330 ms_handle_reset con 0x5571f4f06c00 session 0x5571f4d92000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 331 ms_handle_reset con 0x5571f4f07400 session 0x5571f5a892c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.203621+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 42270720 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 331 ms_handle_reset con 0x5571f2c24800 session 0x5571f7f3b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 331 ms_handle_reset con 0x5571f4f07800 session 0x5571f57621e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.203777+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 42262528 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 332 ms_handle_reset con 0x5571f4f07c00 session 0x5571f7f2e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 332 heartbeat osd_stat(store_statfs(0x4f7013000/0x0/0x4ffc00000, data 0x2763aca/0x270a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.203932+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 333 ms_handle_reset con 0x5571f573b800 session 0x5571f31a05a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139747328 unmapped: 42196992 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2305554 data_alloc: 234881024 data_used: 9760768
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 333 ms_handle_reset con 0x5571f4f06c00 session 0x5571f313a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 333 ms_handle_reset con 0x5571f4f07400 session 0x5571f4d92000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.204047+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 42188800 heap: 181944320 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 334 ms_handle_reset con 0x5571f4f07c00 session 0x5571f3b070e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 334 ms_handle_reset con 0x5571f4f06800 session 0x5571f4e714a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 334 ms_handle_reset con 0x5571f572c000 session 0x5571f4d923c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.204239+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152371200 unmapped: 33775616 heap: 186146816 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.775141239s of 10.085356712s, submitted: 88
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.204474+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 335 ms_handle_reset con 0x5571f4f07800 session 0x5571f571bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 53329920 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 335 ms_handle_reset con 0x5571f4f06c00 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f4c04000/0x0/0x4ffc00000, data 0x4b6aafc/0x4b18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 335 ms_handle_reset con 0x5571f2c24800 session 0x5571f4eb65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.205008+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 49045504 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.205135+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 153935872 unmapped: 40615936 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110405 data_alloc: 234881024 data_used: 9789440
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f07400 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.205274+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f0005000/0x0/0x4ffc00000, data 0x976aff0/0x9719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145588224 unmapped: 48963584 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f07c00 session 0x5571f5ccba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.205379+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145645568 unmapped: 48906240 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f07c00 session 0x5571f5a890e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.205516+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 48840704 heap: 194551808 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.205631+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 heartbeat osd_stat(store_statfs(0x4eb003000/0x0/0x4ffc00000, data 0xe76cbee/0xe71b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,1,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 61341696 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f07400 session 0x5571f4eb74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f06c00 session 0x5571f4e4a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.205729+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 ms_handle_reset con 0x5571f4f06400 session 0x5571f4e4ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 141729792 unmapped: 61218816 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3976657 data_alloc: 234881024 data_used: 9797632
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.205854+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 ms_handle_reset con 0x5571f4f07800 session 0x5571f4d8dc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146006016 unmapped: 56942592 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 ms_handle_reset con 0x5571f4f06400 session 0x5571f4e30780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 ms_handle_reset con 0x5571f2c24800 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.206010+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 150331392 unmapped: 52617216 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.021196365s of 10.050020218s, submitted: 157
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 ms_handle_reset con 0x5571f4f06800 session 0x5571f4d92960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.206147+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 ms_handle_reset con 0x5571f573b800 session 0x5571f313be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 heartbeat osd_stat(store_statfs(0x4e3c00000/0x0/0x4ffc00000, data 0x15b6e7de/0x15b1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 337 handle_osd_map epochs [338,338], i have 338, src has [1,338]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 338 ms_handle_reset con 0x5571f4f06c00 session 0x5571f56f74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 60751872 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 338 ms_handle_reset con 0x5571f4f06c00 session 0x5571f4d90b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.206326+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 60686336 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.206517+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490136 data_alloc: 234881024 data_used: 9814016
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142303232 unmapped: 60645376 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 340 ms_handle_reset con 0x5571f2c24800 session 0x5571f3b06960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 340 ms_handle_reset con 0x5571f4f06400 session 0x5571f2d00d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 340 ms_handle_reset con 0x5571f4f06800 session 0x5571f31a1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.206651+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142319616 unmapped: 60628992 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 340 ms_handle_reset con 0x5571f4f07400 session 0x5571f4d92780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.206802+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 60620800 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f573b800 session 0x5571f3200000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f2c24800 session 0x5571f3201c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 heartbeat osd_stat(store_statfs(0x4e3bfb000/0x0/0x4ffc00000, data 0x15b735d6/0x15b22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.206974+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f6866800 session 0x5571f31a01e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f4f07000 session 0x5571f3a1e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 60604416 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f4f06800 session 0x5571f4e832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 ms_handle_reset con 0x5571f4f06400 session 0x5571f55b0000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.207234+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142360576 unmapped: 60588032 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 342 ms_handle_reset con 0x5571f4f07000 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 342 ms_handle_reset con 0x5571f4f06800 session 0x5571f5762000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.207402+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487270 data_alloc: 234881024 data_used: 9687040
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142401536 unmapped: 60547072 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 343 handle_osd_map epochs [343,343], i have 343, src has [1,343]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 343 ms_handle_reset con 0x5571f573b800 session 0x5571f313cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 343 ms_handle_reset con 0x5571f2c24800 session 0x5571f4e82f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 343 ms_handle_reset con 0x5571f4f06c00 session 0x5571f3201c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.207567+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142409728 unmapped: 60538880 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 344 ms_handle_reset con 0x5571f4f06400 session 0x5571f4d92780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 344 ms_handle_reset con 0x5571f6866800 session 0x5571f57630e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 344 ms_handle_reset con 0x5571f4f07000 session 0x5571f7f3a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.207707+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 344 ms_handle_reset con 0x5571f573b800 session 0x5571f4b6de00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142475264 unmapped: 60473344 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.134190559s of 10.124237061s, submitted: 199
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 345 ms_handle_reset con 0x5571f4f06800 session 0x5571f31a1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 345 ms_handle_reset con 0x5571f4f06c00 session 0x5571f2d00d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.207876+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 60448768 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f4f07000 session 0x5571f3a1eb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f573b800 session 0x5571f5a890e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f4f06400 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 heartbeat osd_stat(store_statfs(0x4e3e88000/0x0/0x4ffc00000, data 0x156b907b/0x15894000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.208026+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f572b400 session 0x5571f571b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f6eac800 session 0x5571f551a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 heartbeat osd_stat(store_statfs(0x4e3e83000/0x0/0x4ffc00000, data 0x156bb199/0x15898000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142524416 unmapped: 60424192 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 ms_handle_reset con 0x5571f4f06800 session 0x5571f3b072c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.208212+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458727 data_alloc: 218103808 data_used: 9490432
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142532608 unmapped: 60416000 heap: 202948608 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f4f07000 session 0x5571f3a1e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f4f06c00 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f573b800 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.208332+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 34906112 heap: 211353600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f6eac800 session 0x5571f4b6d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f4f07000 session 0x5571f4ea61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.208487+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 142811136 unmapped: 72744960 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.208702+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152412160 unmapped: 63143936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.208890+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 ms_handle_reset con 0x5571f4f07c00 session 0x5571f5cca5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 148512768 unmapped: 67043328 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.209037+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 348 heartbeat osd_stat(store_statfs(0x4dce9b000/0x0/0x4ffc00000, data 0x1c6a3a29/0x1c882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,1,0,0,0,3,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 348 ms_handle_reset con 0x5571f5a61000 session 0x5571f7f2e780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5287916 data_alloc: 218103808 data_used: 7839744
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 144621568 unmapped: 70934528 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 349 ms_handle_reset con 0x5571f572a800 session 0x5571f56f7a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 349 ms_handle_reset con 0x5571f6e01c00 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.209180+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 349 ms_handle_reset con 0x5571f6866800 session 0x5571f5a88f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149200896 unmapped: 66355200 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.209404+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 349 heartbeat osd_stat(store_statfs(0x4d9297000/0x0/0x4ffc00000, data 0x202a56b5/0x20485000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 61898752 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.467541695s of 10.024539948s, submitted: 356
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.209525+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 60555264 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.209668+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 350 heartbeat osd_stat(store_statfs(0x4d569a000/0x0/0x4ffc00000, data 0x23ea5653/0x24084000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 146882560 unmapped: 68673536 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.209892+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 350 ms_handle_reset con 0x5571f4f07c00 session 0x5571f4d8c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 350 ms_handle_reset con 0x5571f4f07000 session 0x5571f4e83a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6305573 data_alloc: 218103808 data_used: 7852032
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 147169280 unmapped: 68386816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 350 ms_handle_reset con 0x5571f4f06800 session 0x5571f3b07860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 351 ms_handle_reset con 0x5571f4f07c00 session 0x5571f7f2fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 351 ms_handle_reset con 0x5571f572b400 session 0x5571f4eb65a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 351 ms_handle_reset con 0x5571f573b800 session 0x5571f55b1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.210138+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 351 heartbeat osd_stat(store_statfs(0x4d3294000/0x0/0x4ffc00000, data 0x262a8d6d/0x2648a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 147447808 unmapped: 68108288 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 ms_handle_reset con 0x5571f6866800 session 0x5571f4e4ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 ms_handle_reset con 0x5571f572a800 session 0x5571f31a01e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 ms_handle_reset con 0x5571f4f06800 session 0x5571f33170e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 ms_handle_reset con 0x5571f4f07c00 session 0x5571f30b0780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.210268+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 68091904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.210541+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 heartbeat osd_stat(store_statfs(0x4d3293000/0x0/0x4ffc00000, data 0x262aa5d7/0x2648a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 68091904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 353 ms_handle_reset con 0x5571f572b400 session 0x5571f3843a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.210777+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 353 ms_handle_reset con 0x5571f573b800 session 0x5571f5ccad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149135360 unmapped: 66420736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 353 ms_handle_reset con 0x5571f4f06800 session 0x5571f6582b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 353 ms_handle_reset con 0x5571f4f07c00 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.210939+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6422910 data_alloc: 218103808 data_used: 7925760
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149135360 unmapped: 66420736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.211141+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 66412544 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.211343+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 354 ms_handle_reset con 0x5571f572a800 session 0x5571f313d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 354 ms_handle_reset con 0x5571f572b400 session 0x5571f65823c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 66469888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.211585+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 66469888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573b800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.211844+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 354 heartbeat osd_stat(store_statfs(0x4d2881000/0x0/0x4ffc00000, data 0x26cb8d6f/0x26e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.116710663s of 11.525857925s, submitted: 218
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 354 ms_handle_reset con 0x5571f573b800 session 0x5571f5762f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 66469888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.212063+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6428690 data_alloc: 218103808 data_used: 7933952
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 66469888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.212289+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 66469888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f4f06800 session 0x5571f4e4be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f4f07c00 session 0x5571f4d921e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.212479+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f572a800 session 0x5571f4e70f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f572b400 session 0x5571f4e31c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149250048 unmapped: 66306048 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.212627+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 heartbeat osd_stat(store_statfs(0x4d2367000/0x0/0x4ffc00000, data 0x271d0871/0x273b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149250048 unmapped: 66306048 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6866800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f6866800 session 0x5571f4e703c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.212796+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 ms_handle_reset con 0x5571f4f07c00 session 0x5571f55b12c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 66142208 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.212948+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 356 ms_handle_reset con 0x5571f6e01c00 session 0x5571f3b070e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6633761 data_alloc: 234881024 data_used: 11247616
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 66215936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5a61000 session 0x5571f551ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.213160+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f06800 session 0x5571f3a1e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.213394+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d0e92000/0x0/0x4ffc00000, data 0x28291029/0x2847a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.213536+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.213778+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.213991+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d0e92000/0x0/0x4ffc00000, data 0x28291029/0x2847a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6679744 data_alloc: 234881024 data_used: 17281024
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d0e92000/0x0/0x4ffc00000, data 0x28291029/0x2847a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.214171+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.214344+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d0e92000/0x0/0x4ffc00000, data 0x28291029/0x2847a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.214501+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.214735+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152731648 unmapped: 62824448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eac800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.481342316s of 15.962320328s, submitted: 78
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.214899+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6eac800 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f06800 session 0x5571f4e303c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f07c00 session 0x5571f3dda780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5a61000 session 0x5571f4e4ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6e01c00 session 0x5571f4ea6f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6718989 data_alloc: 234881024 data_used: 17281024
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 62734336 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.215082+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 62734336 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.215297+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d0992000/0x0/0x4ffc00000, data 0x28793029/0x2897c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 162979840 unmapped: 52576256 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.215471+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 164339712 unmapped: 51216384 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6867c00 session 0x5571f7f2e960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.215681+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163078144 unmapped: 52477952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f07c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.215855+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6816865 data_alloc: 234881024 data_used: 19554304
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163069952 unmapped: 52486144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6867c00 session 0x5571f570f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.216161+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166854656 unmapped: 48701440 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.216366+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6e01c00 session 0x5571f4b6c960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f2d51c00 session 0x5571f7f2f860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cff75000/0x0/0x4ffc00000, data 0x291b0029/0x29399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166895616 unmapped: 48660480 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.216551+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5bfb000 session 0x5571f56f6f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166912000 unmapped: 48644096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.216765+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cff75000/0x0/0x4ffc00000, data 0x291b0029/0x29399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4aa8000 session 0x5571f2efe780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166944768 unmapped: 48611328 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4aa8000 session 0x5571f56f70e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.216891+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6856699 data_alloc: 234881024 data_used: 24809472
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166977536 unmapped: 48578560 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.217056+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.685091019s of 11.346314430s, submitted: 174
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f2d51c00 session 0x5571f5a892c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 166977536 unmapped: 48578560 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.217171+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cff73000/0x0/0x4ffc00000, data 0x291b2029/0x2939b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5bfb000 session 0x5571f313ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167018496 unmapped: 48537600 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f6867c00 session 0x5571f3b2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.217333+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.217527+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cff73000/0x0/0x4ffc00000, data 0x291b2029/0x2939b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.217784+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6857832 data_alloc: 234881024 data_used: 24809472
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.217950+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.218151+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.218318+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 48472064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cff72000/0x0/0x4ffc00000, data 0x291b3029/0x2939c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.218549+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167133184 unmapped: 48422912 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.218717+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6965376 data_alloc: 234881024 data_used: 25239552
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167067648 unmapped: 48488448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.218900+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.891384125s of 10.138665199s, submitted: 58
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167837696 unmapped: 47718400 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf948000/0x0/0x4ffc00000, data 0x29fec029/0x299c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.219120+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf933000/0x0/0x4ffc00000, data 0x29ff9029/0x299d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 47685632 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf933000/0x0/0x4ffc00000, data 0x29ff9029/0x299d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.219285+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 47685632 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf933000/0x0/0x4ffc00000, data 0x29ff9029/0x299d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.219445+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf933000/0x0/0x4ffc00000, data 0x29ff9029/0x299d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167870464 unmapped: 47685632 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.219616+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6968620 data_alloc: 234881024 data_used: 25100288
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 47800320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f572a800 session 0x5571f4d92d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f572b400 session 0x5571f570eb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.225344+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f2d51c00 session 0x5571f313cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167755776 unmapped: 47800320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.225735+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf95f000/0x0/0x4ffc00000, data 0x29fd6029/0x299af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.225888+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.226295+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.226443+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf95f000/0x0/0x4ffc00000, data 0x29fd6029/0x299af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6962232 data_alloc: 234881024 data_used: 24989696
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.226668+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.226902+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.227116+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf95f000/0x0/0x4ffc00000, data 0x29fd6029/0x299af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167763968 unmapped: 47792128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.227317+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167772160 unmapped: 47783936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.227530+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf95f000/0x0/0x4ffc00000, data 0x29fd6029/0x299af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6962232 data_alloc: 234881024 data_used: 24989696
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167780352 unmapped: 47775744 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.227721+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167780352 unmapped: 47775744 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.391123772s of 15.470210075s, submitted: 40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4aa8000 session 0x5571f2d00d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.227893+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf95f000/0x0/0x4ffc00000, data 0x29fd6029/0x299af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6867c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 167936000 unmapped: 47620096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.228082+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 47538176 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.228305+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 47513600 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.228502+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf93b000/0x0/0x4ffc00000, data 0x29ffa029/0x299d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6964313 data_alloc: 234881024 data_used: 25157632
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168042496 unmapped: 47513600 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f06800 session 0x5571f551a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f07c00 session 0x5571f551ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5a61000 session 0x5571f5ccb0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.229858+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 51740672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.230048+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 51740672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.230213+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 51740672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.230363+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4aa8000 session 0x5571f55b03c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4f06800 session 0x5571f5762b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f572b400 session 0x5571f65821e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4d905a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3b2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163848192 unmapped: 51707904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.230525+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 heartbeat osd_stat(store_statfs(0x4cf691000/0x0/0x4ffc00000, data 0x2a2a4029/0x29c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6947447 data_alloc: 234881024 data_used: 19906560
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163848192 unmapped: 51707904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.230717+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 ms_handle_reset con 0x5571f5a61000 session 0x5571f570fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 163848192 unmapped: 51707904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.230877+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d56800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.758706093s of 10.129618645s, submitted: 34
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 358 ms_handle_reset con 0x5571f6e01c00 session 0x5571f2eff0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 358 heartbeat osd_stat(store_statfs(0x4cf691000/0x0/0x4ffc00000, data 0x2a2a4029/0x29c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 164192256 unmapped: 51363840 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.231022+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 359 ms_handle_reset con 0x5571f2d56800 session 0x5571f55b05a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 359 ms_handle_reset con 0x5571f4f06800 session 0x5571f4d90d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 164233216 unmapped: 51322880 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.231152+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 360 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4da9c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 360 ms_handle_reset con 0x5571f4aa8000 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 164052992 unmapped: 51503104 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.232688+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7071805 data_alloc: 234881024 data_used: 21827584
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 165249024 unmapped: 50307072 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.232816+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 361 ms_handle_reset con 0x5571f3035400 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168640512 unmapped: 46915584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.232959+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 361 ms_handle_reset con 0x5571f2d51400 session 0x5571f6582960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168689664 unmapped: 46866432 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.233195+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 361 heartbeat osd_stat(store_statfs(0x4d001d000/0x0/0x4ffc00000, data 0x29912011/0x292ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 362 ms_handle_reset con 0x5571f2d51c00 session 0x5571f551a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168755200 unmapped: 46800896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.233361+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168755200 unmapped: 46800896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.233506+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 362 ms_handle_reset con 0x5571f4aa8000 session 0x5571f7f3b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 362 ms_handle_reset con 0x5571f3035400 session 0x5571f4ea63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6977886 data_alloc: 251658240 data_used: 29011968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 168886272 unmapped: 46669824 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.233679+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 29720576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.233845+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.225950241s of 10.408130646s, submitted: 171
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 169140224 unmapped: 46415872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.234034+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173604864 unmapped: 41951232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.234194+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 363 heartbeat osd_stat(store_statfs(0x4cb00d000/0x0/0x4ffc00000, data 0x2e9216d7/0x2e301000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173842432 unmapped: 41713664 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.234379+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7863122 data_alloc: 251658240 data_used: 29020160
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 169959424 unmapped: 45596672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.234543+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 heartbeat osd_stat(store_statfs(0x4c8409000/0x0/0x4ffc00000, data 0x31523191/0x30f04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 37052416 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.234726+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 44228608 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.235169+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184131584 unmapped: 31424512 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.235302+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.235456+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 35282944 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8779074 data_alloc: 251658240 data_used: 29020160
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.235639+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174645248 unmapped: 40910848 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.235826+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 heartbeat osd_stat(store_statfs(0x4bfaab000/0x0/0x4ffc00000, data 0x39a82191/0x39463000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 34111488 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f30ce400 session 0x5571f7f2e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f573ac00 session 0x5571f8d234a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f4f06800 session 0x5571f31ef680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.052072048s of 10.067953110s, submitted: 125
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f573ac00 session 0x5571f3f20d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4ea61e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.235990+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 38117376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.236269+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177487872 unmapped: 38068224 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.236464+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177324032 unmapped: 38232064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f3035400 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f30ce400 session 0x5571f551ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7121241 data_alloc: 251658240 data_used: 30130176
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.236600+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 39206912 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 heartbeat osd_stat(store_statfs(0x4cf973000/0x0/0x4ffc00000, data 0x29fba191/0x2999b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 ms_handle_reset con 0x5571f2d51c00 session 0x5571f551a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.236734+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176414720 unmapped: 39141376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.237575+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176414720 unmapped: 39141376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.237723+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176283648 unmapped: 39272448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 365 ms_handle_reset con 0x5571f3035400 session 0x5571f4d92d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 365 heartbeat osd_stat(store_statfs(0x4cf96f000/0x0/0x4ffc00000, data 0x29fbbdb9/0x2999e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.237831+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 39264256 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 365 ms_handle_reset con 0x5571f4f06800 session 0x5571f2efe780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 365 ms_handle_reset con 0x5571f573ac00 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7133339 data_alloc: 251658240 data_used: 31137792
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.237986+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176619520 unmapped: 38936576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 366 heartbeat osd_stat(store_statfs(0x4cf96f000/0x0/0x4ffc00000, data 0x29fbbdb9/0x2999e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.238113+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 38871040 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.914572239s of 10.001122475s, submitted: 163
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 367 ms_handle_reset con 0x5571f8b18c00 session 0x5571f8d22d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 367 ms_handle_reset con 0x5571f30ce800 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.238275+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176914432 unmapped: 38641664 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 368 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4ea7680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 368 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4da8d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.238503+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 38625280 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.238756+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176963584 unmapped: 38592512 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 ms_handle_reset con 0x5571f3035400 session 0x5571f3b07c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f06800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5355289 data_alloc: 251658240 data_used: 30846976
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.238900+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 ms_handle_reset con 0x5571f4f06800 session 0x5571f394a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 38281216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 heartbeat osd_stat(store_statfs(0x4dfb82000/0x0/0x4ffc00000, data 0x17428b23/0x17389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.239136+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177291264 unmapped: 38264832 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 ms_handle_reset con 0x5571f2d51c00 session 0x5571f8d23e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.239270+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 38699008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 371 ms_handle_reset con 0x5571f3035400 session 0x5571f3b06780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 371 heartbeat osd_stat(store_statfs(0x4ee382000/0x0/0x4ffc00000, data 0xa02a76b/0x9f8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.239411+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 38772736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.239599+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 38772736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3067549 data_alloc: 251658240 data_used: 30855168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.239766+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176832512 unmapped: 38723584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 ms_handle_reset con 0x5571f30ce800 session 0x5571f4d90780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 ms_handle_reset con 0x5571f4aa8000 session 0x5571f7f2fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f573ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 ms_handle_reset con 0x5571f573ac00 session 0x5571f32014a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.239950+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176881664 unmapped: 38674432 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 ms_handle_reset con 0x5571f2d51c00 session 0x5571f313c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.878788948s of 10.021809578s, submitted: 245
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 ms_handle_reset con 0x5571f3035400 session 0x5571f8d22780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.240106+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f537a000/0x0/0x4ffc00000, data 0x40302c3/0x3f93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177184768 unmapped: 38371328 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.240280+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177209344 unmapped: 38346752 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.240403+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 38281216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3079637 data_alloc: 251658240 data_used: 30904320
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.240579+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 38281216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.240710+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3034400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 38281216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f5351000/0x0/0x4ffc00000, data 0x4056dcd/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.240927+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 373 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4eb6d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 38281216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.241114+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177561600 unmapped: 37994496 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 373 heartbeat osd_stat(store_statfs(0x4f5342000/0x0/0x4ffc00000, data 0x4066dcd/0x3fcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4dbd800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 374 heartbeat osd_stat(store_statfs(0x4f5342000/0x0/0x4ffc00000, data 0x4066dcd/0x3fcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 374 ms_handle_reset con 0x5571f4dbd800 session 0x5571f4e303c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 374 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4ea72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.241245+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 38133760 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 374 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4befc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3088819 data_alloc: 251658240 data_used: 30994432
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.241389+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 38133760 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.241532+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177537024 unmapped: 38019072 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 375 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.602946281s of 10.436694145s, submitted: 47
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.241720+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 37978112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.241955+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 375 heartbeat osd_stat(store_statfs(0x4f533a000/0x0/0x4ffc00000, data 0x406a5d7/0x3fd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177627136 unmapped: 37928960 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 ms_handle_reset con 0x5571f2d51c00 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.242144+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 37847040 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 ms_handle_reset con 0x5571f3034400 session 0x5571f4da94a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3093271 data_alloc: 251658240 data_used: 30994432
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.242293+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 36028416 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4dbd800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 ms_handle_reset con 0x5571f4dbd800 session 0x5571f8d232c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.242405+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 ms_handle_reset con 0x5571f3035400 session 0x5571f4d8dc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4e310e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 35610624 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.242528+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 35610624 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.242667+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179945472 unmapped: 35610624 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f511a000/0x0/0x4ffc00000, data 0x42891bb/0x41f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 376 handle_osd_map epochs [377,377], i have 377, src has [1,377]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 377 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4ea6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3034400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 377 ms_handle_reset con 0x5571f2d51c00 session 0x5571f6582f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.242833+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179961856 unmapped: 35594240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3134202 data_alloc: 251658240 data_used: 32518144
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.242924+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179961856 unmapped: 35594240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5113000/0x0/0x4ffc00000, data 0x428c9b9/0x41f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.243167+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180011008 unmapped: 35545088 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 378 ms_handle_reset con 0x5571f3034400 session 0x5571f4e4ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.243288+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181084160 unmapped: 34471936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f5040000/0x0/0x4ffc00000, data 0x435f9b9/0x42cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.243458+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.679069519s of 11.092819214s, submitted: 63
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 34414592 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 378 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4b6d4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.243705+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181223424 unmapped: 34332672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.243907+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3149042 data_alloc: 251658240 data_used: 32522240
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 35094528 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.244060+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 379 ms_handle_reset con 0x5571f2cf3000 session 0x5571f56f7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 34955264 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.244262+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 379 heartbeat osd_stat(store_statfs(0x4f5025000/0x0/0x4ffc00000, data 0x46ba493/0x42e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 34938880 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.244401+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 34938880 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 379 ms_handle_reset con 0x5571f3035400 session 0x5571f4ea63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.244681+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _finish_auth 0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.245553+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180723712 unmapped: 34832384 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a66800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 380 ms_handle_reset con 0x5571f4f03800 session 0x5571f56f7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f5025000/0x0/0x4ffc00000, data 0x46ba4a3/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.244823+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3186944 data_alloc: 251658240 data_used: 32534528
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 180756480 unmapped: 34799616 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.244986+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181313536 unmapped: 34242560 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f5a66800 session 0x5571f4b6c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f2c6e000 session 0x5571f65832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.245137+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181403648 unmapped: 34152448 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4b6cf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.245386+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181420032 unmapped: 34136064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f4fa4000/0x0/0x4ffc00000, data 0x47341a2/0x4367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.494256973s of 10.695263863s, submitted: 50
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 heartbeat osd_stat(store_statfs(0x4f4fa4000/0x0/0x4ffc00000, data 0x47341a2/0x4367000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.245553+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4eb72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181010432 unmapped: 34545664 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f4f03800 session 0x5571f4d932c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 ms_handle_reset con 0x5571f3035400 session 0x5571f3317c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.245648+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3205500 data_alloc: 251658240 data_used: 32718848
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 382 ms_handle_reset con 0x5571f2c6e000 session 0x5571f2d01680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182091776 unmapped: 33464320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: mgrc ms_handle_reset ms_handle_reset con 0x5571f572a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1430667654
Nov 29 08:25:26 compute-0 ceph-osd[90977]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1430667654,v1:192.168.122.100:6801/1430667654]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: get_auth_request con 0x5571f5a66800 auth_method 0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 382 handle_osd_map epochs [383,383], i have 383, src has [1,383]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.245758+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 383 ms_handle_reset con 0x5571f2cf3000 session 0x5571f5762000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 33357824 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.277852+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182198272 unmapped: 33357824 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f4f03800 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f4f99000/0x0/0x4ffc00000, data 0x473d9ac/0x4374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.278152+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 33308672 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f3035400 session 0x5571f571b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.278307+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f4f94000/0x0/0x4ffc00000, data 0x473f582/0x4377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 33193984 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f4f8b000/0x0/0x4ffc00000, data 0x474b582/0x4383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f2d51c00 session 0x5571f313ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4b6d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.278441+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3217455 data_alloc: 251658240 data_used: 33099776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182403072 unmapped: 33153024 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f30ce800 session 0x5571f4da8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4e82f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f2d51c00 session 0x5571f5ccbc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 ms_handle_reset con 0x5571f3035400 session 0x5571f5a883c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.278550+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 385 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4b6c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 385 ms_handle_reset con 0x5571f2cf3000 session 0x5571f3843680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183599104 unmapped: 31956992 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 385 ms_handle_reset con 0x5571f30ce800 session 0x5571f570f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.278728+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 385 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5cca3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 386 ms_handle_reset con 0x5571f4aa8000 session 0x5571f5763e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 32137216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 386 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.278934+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 32137216 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 386 ms_handle_reset con 0x5571f2d51c00 session 0x5571f3b2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.987792015s of 10.008704185s, submitted: 88
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.279075+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 386 ms_handle_reset con 0x5571f4f03800 session 0x5571f2effe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 387 ms_handle_reset con 0x5571f30ce800 session 0x5571f55b14a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 32129024 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f4aa8000 session 0x5571f5ccb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f2cf3000 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f2d5b400 session 0x5571f55b1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f38d8800 session 0x5571f4da9680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.279243+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3183894 data_alloc: 251658240 data_used: 32436224
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182321152 unmapped: 33234944 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f5304000/0x0/0x4ffc00000, data 0x43d0590/0x4008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.279409+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f2c6e000 session 0x5571f2eff0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f2d51c00 session 0x5571f57632c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 33210368 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 ms_handle_reset con 0x5571f2cf3000 session 0x5571f3b061e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 389 ms_handle_reset con 0x5571f2c6e000 session 0x5571f551a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.279547+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 33177600 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.279670+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 389 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4e4a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 33177600 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 389 ms_handle_reset con 0x5571f5bfb000 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 389 ms_handle_reset con 0x5571f6867c00 session 0x5571f4d8d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.279850+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 390 ms_handle_reset con 0x5571f2cf3000 session 0x5571f7f2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 390 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182460416 unmapped: 33095680 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 390 ms_handle_reset con 0x5571f2d51c00 session 0x5571f55b01e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 390 heartbeat osd_stat(store_statfs(0x4f5316000/0x0/0x4ffc00000, data 0x3de6214/0x3ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 391 ms_handle_reset con 0x5571f2d5b400 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 391 ms_handle_reset con 0x5571f30ce800 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.280002+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f5316000/0x0/0x4ffc00000, data 0x3de6214/0x3ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3159492 data_alloc: 251658240 data_used: 32333824
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182493184 unmapped: 33062912 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 392 ms_handle_reset con 0x5571f38d8800 session 0x5571f4e82000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.280177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182501376 unmapped: 33054720 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 392 ms_handle_reset con 0x5571f5bfb000 session 0x5571f2effe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4d932c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.280345+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182501376 unmapped: 33054720 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f5326000/0x0/0x4ffc00000, data 0x3dc9424/0x3fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4ea63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f2d5b400 session 0x5571f4e4a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f2d51c00 session 0x5571f4e4ad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 heartbeat osd_stat(store_statfs(0x4f5326000/0x0/0x4ffc00000, data 0x3dc9424/0x3fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.280624+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182558720 unmapped: 32997376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f5a61000 session 0x5571f570f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.088065147s of 10.210553169s, submitted: 228
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 ms_handle_reset con 0x5571f6e01c00 session 0x5571f3317e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.280756+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182558720 unmapped: 32997376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.280947+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3166796 data_alloc: 251658240 data_used: 32354304
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 ms_handle_reset con 0x5571f2c6e000 session 0x5571f6583680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 32980992 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.281200+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 45400064 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 ms_handle_reset con 0x5571f38d8800 session 0x5571f3ddbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 ms_handle_reset con 0x5571f5bfb000 session 0x5571f3b06d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f7601000/0x0/0x4ffc00000, data 0x1af3fb8/0x1d0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.281424+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 45391872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 ms_handle_reset con 0x5571f2cf3000 session 0x5571f38425a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.281582+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 45391872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.281730+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 45391872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 395 ms_handle_reset con 0x5571f38d8800 session 0x5571f313c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.281996+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2703324 data_alloc: 218103808 data_used: 8097792
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170164224 unmapped: 45391872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.282210+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d51c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 396 ms_handle_reset con 0x5571f5a61000 session 0x5571f4d92b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170180608 unmapped: 45375488 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 397 ms_handle_reset con 0x5571f2d51c00 session 0x5571f8d23c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 397 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3f21e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.282399+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 397 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x1af924c/0x1d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 45359104 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 398 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.282590+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 45359104 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.282900+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 45359104 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.283055+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2712269 data_alloc: 218103808 data_used: 8097792
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170196992 unmapped: 45359104 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.080961227s of 11.466389656s, submitted: 90
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 399 ms_handle_reset con 0x5571f5a61000 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 399 ms_handle_reset con 0x5571f38d8800 session 0x5571f56f70e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6e01c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 399 ms_handle_reset con 0x5571f6e01c00 session 0x5571f4d925a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x1afae22/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.283228+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 ms_handle_reset con 0x5571f2c6e000 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 ms_handle_reset con 0x5571f2cf3000 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 ms_handle_reset con 0x5571f5bfb000 session 0x5571f8d23c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f71e0000/0x0/0x4ffc00000, data 0x1afe4e8/0x1d1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 45367296 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 ms_handle_reset con 0x5571f38d8800 session 0x5571f8d23a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.283746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 45367296 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.284009+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 45367296 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 401 ms_handle_reset con 0x5571f4f03800 session 0x5571f3a1eb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.284370+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f71df000/0x0/0x4ffc00000, data 0x1afe4f8/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 402 ms_handle_reset con 0x5571f4f03800 session 0x5571f313c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 45309952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 402 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3b06d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 403 ms_handle_reset con 0x5571f2c6e000 session 0x5571f570f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 403 ms_handle_reset con 0x5571f5a61000 session 0x5571f6583680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf3000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.284511+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734579 data_alloc: 218103808 data_used: 8114176
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 45309952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 404 ms_handle_reset con 0x5571f2cf3000 session 0x5571f4eb7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 404 ms_handle_reset con 0x5571f2c6e000 session 0x5571f6583e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.284692+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 404 ms_handle_reset con 0x5571f4f03800 session 0x5571f4e832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 45277184 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 404 ms_handle_reset con 0x5571f5a61000 session 0x5571f4b6c960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 405 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4e4a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfb000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 405 ms_handle_reset con 0x5571f5bfb000 session 0x5571f5a88000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.284809+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170287104 unmapped: 45268992 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 406 ms_handle_reset con 0x5571f38d8800 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.284968+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170319872 unmapped: 45236224 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.285170+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 407 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3b06960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170344448 unmapped: 45211648 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f71ca000/0x0/0x4ffc00000, data 0x1b0a8cd/0x1d30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 407 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3843680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 ms_handle_reset con 0x5571f4f03800 session 0x5571f3b2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.285294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747485 data_alloc: 218103808 data_used: 8126464
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 45203456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.285436+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 45203456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.285658+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 ms_handle_reset con 0x5571f5a61000 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 45203456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.148362160s of 12.036891937s, submitted: 222
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.285849+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 ms_handle_reset con 0x5571f38d8800 session 0x5571f4befc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 45203456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f71cc000/0x0/0x4ffc00000, data 0x1b0c3a5/0x1d31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.285972+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 45203456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 409 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4bef680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.286069+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2753829 data_alloc: 218103808 data_used: 8138752
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170360832 unmapped: 45195264 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 409 ms_handle_reset con 0x5571f2d5b000 session 0x5571f4bee960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.286232+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6eacc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170377216 unmapped: 45178880 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 410 ms_handle_reset con 0x5571f6eacc00 session 0x5571f4e70d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.286469+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 410 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4e70f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f71c7000/0x0/0x4ffc00000, data 0x1b0df59/0x1d37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 410 ms_handle_reset con 0x5571f38d8800 session 0x5571f4ea72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170409984 unmapped: 45146112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 411 ms_handle_reset con 0x5571f2d5b000 session 0x5571f7f2ef00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.286798+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 411 ms_handle_reset con 0x5571f4aa8000 session 0x5571f5ccb4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 411 ms_handle_reset con 0x5571f4f03800 session 0x5571f4beeb40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170418176 unmapped: 45137920 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.286978+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170418176 unmapped: 45137920 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 411 heartbeat osd_stat(store_statfs(0x4f71bf000/0x0/0x4ffc00000, data 0x1b11c55/0x1d3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.287113+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2767762 data_alloc: 218103808 data_used: 8163328
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170426368 unmapped: 45129728 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.287242+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 412 ms_handle_reset con 0x5571f2d5b000 session 0x5571f4e82000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 412 ms_handle_reset con 0x5571f2c6e000 session 0x5571f32003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 412 ms_handle_reset con 0x5571f38d8800 session 0x5571f4e4a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.287385+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.434510231s of 10.152920723s, submitted: 84
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f71bf000/0x0/0x4ffc00000, data 0x1b133f7/0x1d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f70c1000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.287526+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 413 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4ea6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.287708+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f70c1000 session 0x5571f55b1860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.287821+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f30cfc00 session 0x5571f4d8d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2775568 data_alloc: 218103808 data_used: 8171520
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.287999+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170475520 unmapped: 45080576 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f2c6e000 session 0x5571f7f2ef00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5b000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.288136+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f38d8800 session 0x5571f4e70d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f4aa8000 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f71b5000/0x0/0x4ffc00000, data 0x1b1874a/0x1d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.288299+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 ms_handle_reset con 0x5571f2d5b000 session 0x5571f4ea72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.288495+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.288612+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780653 data_alloc: 218103808 data_used: 8171520
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f71b4000/0x0/0x4ffc00000, data 0x1b187ac/0x1d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.288764+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.289001+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170500096 unmapped: 45056000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.289157+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 handle_osd_map epochs [417,417], i have 415, src has [1,417]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 415 handle_osd_map epochs [416,417], i have 415, src has [1,417]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.311448097s of 10.459137917s, submitted: 42
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170524672 unmapped: 45031424 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 ms_handle_reset con 0x5571f30cfc00 session 0x5571f4bef680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4bee960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.289290+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f71ae000/0x0/0x4ffc00000, data 0x1b1bf37/0x1d50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 ms_handle_reset con 0x5571f38d8800 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.289460+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2788373 data_alloc: 218103808 data_used: 8179712
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f71ae000/0x0/0x4ffc00000, data 0x1b1bf37/0x1d50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.289636+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4b6d680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.289812+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4dc0c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 ms_handle_reset con 0x5571f4dc0c00 session 0x5571f3b2ed20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f71ad000/0x0/0x4ffc00000, data 0x1b1bf99/0x1d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.289995+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 418 ms_handle_reset con 0x5571f30cfc00 session 0x5571f394be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 44998656 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.290212+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 43941888 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f38d8800 session 0x5571f551b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f4aa8000 session 0x5571f8d23c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f71a5000/0x0/0x4ffc00000, data 0x1b1f741/0x1d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3b06960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.290454+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2795899 data_alloc: 218103808 data_used: 8200192
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171622400 unmapped: 43933696 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f8b18400 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.290621+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171630592 unmapped: 43925504 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.290769+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171630592 unmapped: 43925504 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f30cfc00 session 0x5571f5cca960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f4aa8000 session 0x5571f551a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f551a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f2d5ac00 session 0x5571f551ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.290926+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f38d8800 session 0x5571f5a88b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4d8c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.538212299s of 10.054039955s, submitted: 66
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 41795584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f4b91800 session 0x5571f551a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f2d5ac00 session 0x5571f5ccbe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 ms_handle_reset con 0x5571f30cfc00 session 0x5571f5cca000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.291175+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3201c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f5a881e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5a894a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 ms_handle_reset con 0x5571f30cfc00 session 0x5571f3a1f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170639360 unmapped: 44916736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.291358+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2858384 data_alloc: 218103808 data_used: 8216576
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170639360 unmapped: 44916736 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f6b73000/0x0/0x4ffc00000, data 0x214e397/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 421 ms_handle_reset con 0x5571f4aa8000 session 0x5571f313c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 421 ms_handle_reset con 0x5571f2d5ac00 session 0x5571f4e4a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.291518+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f6b73000/0x0/0x4ffc00000, data 0x214e397/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170524672 unmapped: 45031424 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.291671+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f30cfc00 session 0x5571f4e4a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f2c6e000 session 0x5571f570f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.291812+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f4aa8000 session 0x5571f56f63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f313a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f4b91800 session 0x5571f55b0b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6b6d000/0x0/0x4ffc00000, data 0x2151ce7/0x238f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.291938+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f4aa8000 session 0x5571f571b860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.292069+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2864394 data_alloc: 218103808 data_used: 8220672
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 170532864 unmapped: 45023232 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.292180+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 44294144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.292307+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 44294144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 ms_handle_reset con 0x5571f4b91800 session 0x5571f571a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.292459+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 44294144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6b6e000/0x0/0x4ffc00000, data 0x2151d49/0x2390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.194179535s of 10.678098679s, submitted: 71
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f65da000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 423 ms_handle_reset con 0x5571f65da000 session 0x5571f4b6cd20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.292607+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 44294144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 424 ms_handle_reset con 0x5571f2cf2000 session 0x5571f4ea6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 424 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f571ba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.292762+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2916446 data_alloc: 234881024 data_used: 14434304
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2155549/0x2396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 44294144 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.292872+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171270144 unmapped: 44285952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 425 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f2d00000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 425 ms_handle_reset con 0x5571f2cf2000 session 0x5571f55b1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.293010+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f6b67000/0x0/0x4ffc00000, data 0x2155549/0x2396000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 425 ms_handle_reset con 0x5571f4aa8000 session 0x5571f500fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171294720 unmapped: 44261376 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.293152+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f65da000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 426 ms_handle_reset con 0x5571f5a61400 session 0x5571f33161e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f6b61000/0x0/0x4ffc00000, data 0x2158df5/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 426 ms_handle_reset con 0x5571f65da000 session 0x5571f313c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 426 ms_handle_reset con 0x5571f4b91800 session 0x5571f500e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171311104 unmapped: 44244992 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.293277+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 427 ms_handle_reset con 0x5571f2cf2000 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 44179456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 427 ms_handle_reset con 0x5571f4aa8000 session 0x5571f5a89860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.293380+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2929258 data_alloc: 234881024 data_used: 14450688
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 44179456 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f65da000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 428 ms_handle_reset con 0x5571f65da000 session 0x5571f6583680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.293594+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 429 ms_handle_reset con 0x5571f5a61400 session 0x5571f8d221e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 172654592 unmapped: 42901504 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.293727+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 430 ms_handle_reset con 0x5571f5a61400 session 0x5571f313c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f50f7000/0x0/0x4ffc00000, data 0x2a1ee9b/0x2c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178003968 unmapped: 37552128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.293890+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 430 ms_handle_reset con 0x5571f4aa8000 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 431 ms_handle_reset con 0x5571f2cf2000 session 0x5571f3a1f2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178012160 unmapped: 37543936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.179092407s of 10.051578522s, submitted: 223
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.294013+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178012160 unmapped: 37543936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.294145+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f50b2000/0x0/0x4ffc00000, data 0x2a61aef/0x2ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013550 data_alloc: 234881024 data_used: 15642624
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177455104 unmapped: 38100992 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 431 handle_osd_map epochs [432,432], i have 432, src has [1,432]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.294268+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f4b91800 session 0x5571f5ccb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 37888000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f65da000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f65da000 session 0x5571f31a05a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f2cf2000 session 0x5571f5cca780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f506d000/0x0/0x4ffc00000, data 0x2aa7619/0x2cf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.294527+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 37888000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.294740+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f4aa8000 session 0x5571f32005a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f4b91800 session 0x5571f31a1c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 37888000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.294959+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 37888000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 ms_handle_reset con 0x5571f5a61400 session 0x5571f313b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.295149+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017522 data_alloc: 234881024 data_used: 15724544
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 36823040 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.295341+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f506c000/0x0/0x4ffc00000, data 0x2aa9047/0x2cf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 433 handle_osd_map epochs [434,434], i have 434, src has [1,434]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178733056 unmapped: 36823040 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.295471+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f506c000/0x0/0x4ffc00000, data 0x2aa9047/0x2cf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.295634+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.295848+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f4d8de00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.296155+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3021264 data_alloc: 234881024 data_used: 15724544
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f5045000/0x0/0x4ffc00000, data 0x2acdc6f/0x2d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.296293+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.296506+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f5045000/0x0/0x4ffc00000, data 0x2acdc6f/0x2d17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 36536320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.918247223s of 13.954124451s, submitted: 106
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.296662+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179027968 unmapped: 36528128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.296841+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179027968 unmapped: 36528128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.297003+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3023918 data_alloc: 234881024 data_used: 15724544
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179036160 unmapped: 36519936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 435 ms_handle_reset con 0x5571f2cf2000 session 0x5571f3843c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.297163+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f5041000/0x0/0x4ffc00000, data 0x2ad2897/0x2d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179036160 unmapped: 36519936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.297325+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179036160 unmapped: 36519936 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.297529+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179052544 unmapped: 36503552 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 436 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4d8c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.297757+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 436 ms_handle_reset con 0x5571f30cfc00 session 0x5571f38430e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 179052544 unmapped: 36503552 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.297893+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a61400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 436 ms_handle_reset con 0x5571f5a61400 session 0x5571f3843a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 436 ms_handle_reset con 0x5571f4aa8000 session 0x5571f5762960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3023632 data_alloc: 234881024 data_used: 15728640
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174899200 unmapped: 40656896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.298046+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 ms_handle_reset con 0x5571f4b91800 session 0x5571f6583860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5ccb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 ms_handle_reset con 0x5571f2cf2000 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174899200 unmapped: 40656896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f5fd3000/0x0/0x4ffc00000, data 0x1b3f023/0x1d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.298177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174899200 unmapped: 40656896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.298324+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.284437180s of 10.291283607s, submitted: 82
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c3ac00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 ms_handle_reset con 0x5571f6c3ac00 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 437 handle_osd_map epochs [438,438], i have 438, src has [1,438]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 438 ms_handle_reset con 0x5571f30cfc00 session 0x5571f313c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174907392 unmapped: 40648704 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.298511+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 40632320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 438 ms_handle_reset con 0x5571f4aa8000 session 0x5571f551b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 438 ms_handle_reset con 0x5571f2cf2000 session 0x5571f2d00000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.298658+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2876246 data_alloc: 218103808 data_used: 8286208
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 439 ms_handle_reset con 0x5571f2c6e000 session 0x5571f55b1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 40632320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.298788+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 440 heartbeat osd_stat(store_statfs(0x4f5fc8000/0x0/0x4ffc00000, data 0x1b4441f/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174923776 unmapped: 40632320 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.299153+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 441 ms_handle_reset con 0x5571f4f03400 session 0x5571f570f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174931968 unmapped: 40624128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 441 ms_handle_reset con 0x5571f4b91800 session 0x5571f55b0b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.299291+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174931968 unmapped: 40624128 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4e4a3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.299361+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f4aa8000 session 0x5571f7f2f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f30cfc00 session 0x5571f8d230e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f2cf2000 session 0x5571f3a1f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 40607744 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f5fc1000/0x0/0x4ffc00000, data 0x1b47bf4/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.299472+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4d905a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f2cf2000 session 0x5571f4d90780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888441 data_alloc: 218103808 data_used: 8298496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 ms_handle_reset con 0x5571f30cfc00 session 0x5571f4b6c3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 41795584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3b06960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.299594+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 41795584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f4b91800 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f2c6e000 session 0x5571f7f2e960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.299773+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f2cf2000 session 0x5571f5cca1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173776896 unmapped: 41779200 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f30cfc00 session 0x5571f7f2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 ms_handle_reset con 0x5571f4b91800 session 0x5571f3317c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.299925+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f5fbf000/0x0/0x4ffc00000, data 0x1b49678/0x1d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4f03c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.759210587s of 10.348554611s, submitted: 102
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173801472 unmapped: 41754624 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 ms_handle_reset con 0x5571f4f03c00 session 0x5571f571bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 ms_handle_reset con 0x5571f30ce400 session 0x5571f8d234a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 ms_handle_reset con 0x5571f4aa8000 session 0x5571f394b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.300082+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 ms_handle_reset con 0x5571f2c6e000 session 0x5571f7f3a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f5fba000/0x0/0x4ffc00000, data 0x1b4b302/0x1da2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2cf2000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173817856 unmapped: 41738240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.300238+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896938 data_alloc: 218103808 data_used: 8314880
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173826048 unmapped: 41730048 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.300392+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 ms_handle_reset con 0x5571f4b91800 session 0x5571f4e82780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 ms_handle_reset con 0x5571f2cf2000 session 0x5571f4e714a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173826048 unmapped: 41730048 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f5fb9000/0x0/0x4ffc00000, data 0x1b4cf10/0x1da4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 ms_handle_reset con 0x5571f2c6e000 session 0x5571f7f3b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.300548+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f5fb9000/0x0/0x4ffc00000, data 0x1b4cf10/0x1da4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 ms_handle_reset con 0x5571f30ce400 session 0x5571f33170e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173826048 unmapped: 41730048 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.300775+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 ms_handle_reset con 0x5571f2c93400 session 0x5571f5a88b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173842432 unmapped: 41713664 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 ms_handle_reset con 0x5571f4aa8000 session 0x5571f3317c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4b91800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.300980+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 ms_handle_reset con 0x5571f4b91800 session 0x5571f5cca1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173768704 unmapped: 41787392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 ms_handle_reset con 0x5571f30cfc00 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.301151+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902670 data_alloc: 218103808 data_used: 8327168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173768704 unmapped: 41787392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f5fb9000/0x0/0x4ffc00000, data 0x1b4ea84/0x1da4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 447 ms_handle_reset con 0x5571f2c6e000 session 0x5571f38430e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.301279+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173768704 unmapped: 41787392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 447 ms_handle_reset con 0x5571f30ce400 session 0x5571f3843c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 447 ms_handle_reset con 0x5571f2c93400 session 0x5571f4d8c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.302489+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173768704 unmapped: 41787392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.302687+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.572727680s of 10.057257652s, submitted: 110
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 40738816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.302808+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 40738816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.302963+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4aa8000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910398 data_alloc: 218103808 data_used: 8327168
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.303294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f5fb3000/0x0/0x4ffc00000, data 0x1b522f4/0x1da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 450 ms_handle_reset con 0x5571f4aa8000 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.304886+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 450 ms_handle_reset con 0x5571f2c6e000 session 0x5571f313b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.306027+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 450 ms_handle_reset con 0x5571f2c93400 session 0x5571f551a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 450 ms_handle_reset con 0x5571f30ce400 session 0x5571f8d23680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 451 ms_handle_reset con 0x5571f30cfc00 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.306286+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f572c800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 451 ms_handle_reset con 0x5571f572c800 session 0x5571f55b0b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 451 ms_handle_reset con 0x5571f8b18000 session 0x5571f31a0d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.306457+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2921284 data_alloc: 218103808 data_used: 8335360
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 451 ms_handle_reset con 0x5571f2c6e000 session 0x5571f6583860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 ms_handle_reset con 0x5571f2c93400 session 0x5571f65832c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:15.306584+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f5fa6000/0x0/0x4ffc00000, data 0x1b5926a/0x1db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:16.307582+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 ms_handle_reset con 0x5571f30ce400 session 0x5571f4e4b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30cfc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 ms_handle_reset con 0x5571f30cfc00 session 0x5571f4e4b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:17.307904+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f5fa6000/0x0/0x4ffc00000, data 0x1b5926a/0x1db7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4e4af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f5fa7000/0x0/0x4ffc00000, data 0x1b5925a/0x1db6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.021152496s of 10.191619873s, submitted: 57
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:18.308268+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40714240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:19.308421+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 453 ms_handle_reset con 0x5571f30ce400 session 0x5571f4d90780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2934419 data_alloc: 218103808 data_used: 8355840
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 453 ms_handle_reset con 0x5571f2c93400 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174858240 unmapped: 40697856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b18000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:20.308612+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174858240 unmapped: 40697856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfa800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:21.308784+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 453 handle_osd_map epochs [454,454], i have 454, src has [1,454]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f5bfa800 session 0x5571f3316000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f8b18000 session 0x5571f57623c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f3762400 session 0x5571f4da8960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 40681472 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:22.308999+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f5f9f000/0x0/0x4ffc00000, data 0x1b5cafe/0x1dbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f2c93400 session 0x5571f4b6c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfa800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 173973504 unmapped: 41582592 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f30ce400 session 0x5571f7f3b0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:23.309157+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e3800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 ms_handle_reset con 0x5571f71e3800 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 454 handle_osd_map epochs [455,455], i have 455, src has [1,455]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f5bfa800 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f2c6e000 session 0x5571f551a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c93400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f30ce400 session 0x5571f4d8c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 178184192 unmapped: 37371904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:24.309285+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3222853 data_alloc: 218103808 data_used: 8376320
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f2c6e800 session 0x5571f30b0780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f2c93400 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 ms_handle_reset con 0x5571f3762400 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 456 ms_handle_reset con 0x5571f2c6e000 session 0x5571f551af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 41476096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 456 ms_handle_reset con 0x5571f2c92800 session 0x5571f4e703c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:25.309419+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 41476096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:26.309549+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 456 heartbeat osd_stat(store_statfs(0x4f3b9a000/0x0/0x4ffc00000, data 0x3f60148/0x41c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 41476096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 456 heartbeat osd_stat(store_statfs(0x4f3b9a000/0x0/0x4ffc00000, data 0x3f60148/0x41c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:27.309741+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.795708179s of 10.123501778s, submitted: 128
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174080000 unmapped: 41476096 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 457 heartbeat osd_stat(store_statfs(0x4f3b9a000/0x0/0x4ffc00000, data 0x3f60148/0x41c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:28.309914+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 457 ms_handle_reset con 0x5571f2c6e800 session 0x5571f8d22780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 41467904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:29.310126+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f3b96000/0x0/0x4ffc00000, data 0x3f61d38/0x41c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198815 data_alloc: 218103808 data_used: 8376320
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 41467904 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:30.310347+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f30ce400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 459 ms_handle_reset con 0x5571f30ce400 session 0x5571f2efe780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 41451520 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f2c6e000 session 0x5571f6582960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:31.310567+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f2c6e800 session 0x5571f313c5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 41451520 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f3b8d000/0x0/0x4ffc00000, data 0x3f67059/0x41cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:32.310822+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 41451520 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:33.310962+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f3762400 session 0x5571f33161e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfa800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a63000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f5a63000 session 0x5571f8d221e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f5bfa800 session 0x5571f4e30f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5ccb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174145536 unmapped: 41410560 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 ms_handle_reset con 0x5571f2c92800 session 0x5571f3ddb680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:34.311117+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f3b88000/0x0/0x4ffc00000, data 0x3f68cfd/0x41d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 ms_handle_reset con 0x5571f2c6e800 session 0x5571f2efe960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219793 data_alloc: 218103808 data_used: 8409088
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 ms_handle_reset con 0x5571f3762400 session 0x5571f8d22000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a63000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 ms_handle_reset con 0x5571f5a63000 session 0x5571f3200000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174161920 unmapped: 41394176 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:35.311263+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 ms_handle_reset con 0x5571f2c92800 session 0x5571f57630e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 41385984 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 462 ms_handle_reset con 0x5571f3762400 session 0x5571f4e4ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:36.311380+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 462 ms_handle_reset con 0x5571f8b19c00 session 0x5571f570e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 462 ms_handle_reset con 0x5571f3035800 session 0x5571f8d22960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174219264 unmapped: 41336832 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 ms_handle_reset con 0x5571f3035c00 session 0x5571f4e312c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:37.311547+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 174227456 unmapped: 41328640 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:38.311720+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 ms_handle_reset con 0x5571f3035c00 session 0x5571f7f2f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176717824 unmapped: 38838272 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f3773000/0x0/0x4ffc00000, data 0x3f6c652/0x41d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:39.311933+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.512027740s of 11.217332840s, submitted: 161
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 ms_handle_reset con 0x5571f3035800 session 0x5571f4e71680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 ms_handle_reset con 0x5571f2c92800 session 0x5571f551a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 ms_handle_reset con 0x5571f3762400 session 0x5571f571b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288706 data_alloc: 234881024 data_used: 17289216
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8b19c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d9400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176824320 unmapped: 38731776 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:40.312278+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 ms_handle_reset con 0x5571f38d9400 session 0x5571f571be00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 38715392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:41.312459+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 ms_handle_reset con 0x5571f8b19c00 session 0x5571f31a1680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 ms_handle_reset con 0x5571f2c92800 session 0x5571f4e714a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f376b000/0x0/0x4ffc00000, data 0x3f6ff46/0x41e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 38715392 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:42.312643+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 ms_handle_reset con 0x5571f3035c00 session 0x5571f313d860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 ms_handle_reset con 0x5571f3762400 session 0x5571f6582780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 466 ms_handle_reset con 0x5571f3035800 session 0x5571f313b680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176848896 unmapped: 38707200 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:43.312789+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 466 ms_handle_reset con 0x5571f3035800 session 0x5571f38430e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f3768000/0x0/0x4ffc00000, data 0x3f71b48/0x41e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 38699008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:44.312915+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3304265 data_alloc: 234881024 data_used: 17305600
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 38699008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:45.313043+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 38699008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 466 handle_osd_map epochs [468,468], i have 466, src has [1,468]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 466 handle_osd_map epochs [467,468], i have 466, src has [1,468]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:46.313155+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 ms_handle_reset con 0x5571f2c92800 session 0x5571f571b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 38690816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:47.313323+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 38690816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:48.313468+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 ms_handle_reset con 0x5571f3035c00 session 0x5571f3b072c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 ms_handle_reset con 0x5571f71e2800 session 0x5571f3200780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 ms_handle_reset con 0x5571f3762400 session 0x5571f7f3b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f3761000/0x0/0x4ffc00000, data 0x3f7566a/0x41ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 ms_handle_reset con 0x5571f3762400 session 0x5571f2d00000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 176111616 unmapped: 39444480 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:49.313612+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.782676697s of 10.145098686s, submitted: 114
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337913 data_alloc: 234881024 data_used: 17260544
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f2c92800 session 0x5571f3b06000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188768256 unmapped: 26787840 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:50.313770+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181428224 unmapped: 34127872 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:51.313905+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f3035800 session 0x5571f4e4af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f71e2800 session 0x5571f4eb7860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f3035c00 session 0x5571f313c780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 32022528 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:52.314035+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f2c92800 session 0x5571f3843860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 ms_handle_reset con 0x5571f3035800 session 0x5571f3b06780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182509568 unmapped: 33046528 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:53.314205+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 471 ms_handle_reset con 0x5571f3762400 session 0x5571f394a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 32751616 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:54.314442+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 ms_handle_reset con 0x5571f71e2800 session 0x5571f4d932c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a62000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405866 data_alloc: 234881024 data_used: 17285120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 ms_handle_reset con 0x5571f5a62000 session 0x5571f4e83c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f2ce1000/0x0/0x4ffc00000, data 0x49c18e8/0x4c3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 32718848 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:55.314562+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 ms_handle_reset con 0x5571f2c92800 session 0x5571f55b0d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 ms_handle_reset con 0x5571f3035800 session 0x5571f3b074a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 182837248 unmapped: 32718848 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:56.314692+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 473 ms_handle_reset con 0x5571f71e2800 session 0x5571f55b1860
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 473 handle_osd_map epochs [474,474], i have 474, src has [1,474]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 474 ms_handle_reset con 0x5571f3762400 session 0x5571f313b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181510144 unmapped: 34045952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:57.315138+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfbc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 474 ms_handle_reset con 0x5571f5bfbc00 session 0x5571f4ea63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181510144 unmapped: 34045952 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:58.315302+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfbc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 474 ms_handle_reset con 0x5571f5bfbc00 session 0x5571f4d8de00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 474 ms_handle_reset con 0x5571f2c92800 session 0x5571f4ea7e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181542912 unmapped: 34013184 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:59.315472+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.566374779s of 10.402941704s, submitted: 206
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 475 ms_handle_reset con 0x5571f3035800 session 0x5571f5a883c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412091 data_alloc: 234881024 data_used: 17289216
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:00.315608+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181542912 unmapped: 34013184 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f3d47000/0x0/0x4ffc00000, data 0x49c8936/0x4c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:01.315781+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181542912 unmapped: 34013184 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 475 ms_handle_reset con 0x5571f3762400 session 0x5571f5762000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 ms_handle_reset con 0x5571f71e2800 session 0x5571f2d003c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:02.315950+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181673984 unmapped: 33882112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 ms_handle_reset con 0x5571f71e2800 session 0x5571f570fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 ms_handle_reset con 0x5571f2c92800 session 0x5571f394bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f3d43000/0x0/0x4ffc00000, data 0x49ca58a/0x4c49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:03.316125+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181673984 unmapped: 33882112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:04.316348+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181673984 unmapped: 33882112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f3d45000/0x0/0x4ffc00000, data 0x49ca58a/0x4c49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413593 data_alloc: 234881024 data_used: 17289216
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:05.316516+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181673984 unmapped: 33882112 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 477 ms_handle_reset con 0x5571f3035800 session 0x5571f570f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:06.316720+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181690368 unmapped: 33865728 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:07.316912+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181690368 unmapped: 33865728 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f3d40000/0x0/0x4ffc00000, data 0x49cc0a8/0x4c4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfbc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 477 ms_handle_reset con 0x5571f5bfbc00 session 0x5571f4d903c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f38d8c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 478 ms_handle_reset con 0x5571f38d8c00 session 0x5571f56f74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:08.317021+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 478 ms_handle_reset con 0x5571f3035800 session 0x5571f2d01c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181747712 unmapped: 33808384 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 479 ms_handle_reset con 0x5571f2c92800 session 0x5571f4e30960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:09.317169+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 479 ms_handle_reset con 0x5571f3762400 session 0x5571f3b2e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 33792000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfbc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 479 ms_handle_reset con 0x5571f5bfbc00 session 0x5571f3f20960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430278 data_alloc: 234881024 data_used: 17305600
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 479 ms_handle_reset con 0x5571f71e2800 session 0x5571f5ccad20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:10.317323+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 33792000 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.738577843s of 11.016539574s, submitted: 99
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:11.317444+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 33775616 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 481 ms_handle_reset con 0x5571f71e2800 session 0x5571f4d8d0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f3d35000/0x0/0x4ffc00000, data 0x49d312c/0x4c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:12.317694+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 481 ms_handle_reset con 0x5571f3035800 session 0x5571f2d00d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 33775616 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 ms_handle_reset con 0x5571f2c92800 session 0x5571f3b07a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:13.317846+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 ms_handle_reset con 0x5571f2c6e000 session 0x5571f2d01680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181870592 unmapped: 33685504 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 ms_handle_reset con 0x5571f2c6e800 session 0x5571f5a881e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 ms_handle_reset con 0x5571f3762400 session 0x5571f4e4a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 483 ms_handle_reset con 0x5571f2c92800 session 0x5571f551bc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:14.318292+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181911552 unmapped: 33644544 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444659 data_alloc: 234881024 data_used: 17326080
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:15.318567+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 483 ms_handle_reset con 0x5571f3035800 session 0x5571f3ddaf00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181911552 unmapped: 33644544 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f71e2800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 484 ms_handle_reset con 0x5571f71e2800 session 0x5571f4b6c1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:16.318760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181936128 unmapped: 33619968 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f3d2b000/0x0/0x4ffc00000, data 0x49d8622/0x4c62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:17.319206+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181936128 unmapped: 33619968 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 484 ms_handle_reset con 0x5571f2c6e000 session 0x5571f571a1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:18.319467+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 181952512 unmapped: 33603584 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:19.319742+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 485 ms_handle_reset con 0x5571f2c92800 session 0x5571f571a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183001088 unmapped: 32555008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3d28000/0x0/0x4ffc00000, data 0x49da1ec/0x4c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3d28000/0x0/0x4ffc00000, data 0x49da1ec/0x4c65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 485 ms_handle_reset con 0x5571f3035800 session 0x5571f65825a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450065 data_alloc: 234881024 data_used: 17350656
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:20.319970+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183001088 unmapped: 32555008 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.444325447s of 10.354497910s, submitted: 100
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:21.320145+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 183009280 unmapped: 32546816 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f3d25000/0x0/0x4ffc00000, data 0x49dbcfa/0x4c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f3d25000/0x0/0x4ffc00000, data 0x49dbcfa/0x4c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:22.320272+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 31498240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:23.320440+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 487 ms_handle_reset con 0x5571f3762400 session 0x5571f4d905a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 31498240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:24.320556+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 31498240 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458971 data_alloc: 234881024 data_used: 17354752
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:25.320797+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 31481856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:26.320951+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f3d1f000/0x0/0x4ffc00000, data 0x49df522/0x4c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 31481856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:27.321164+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 31481856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:28.321378+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 31481856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:29.321567+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 31481856 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3459131 data_alloc: 234881024 data_used: 17358848
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:30.321699+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184090624 unmapped: 31465472 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:31.321847+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5bfbc00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 ms_handle_reset con 0x5571f5bfbc00 session 0x5571f7f2e3c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184107008 unmapped: 31449088 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f3d1c000/0x0/0x4ffc00000, data 0x49e0ff8/0x4c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:32.321992+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184107008 unmapped: 31449088 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 ms_handle_reset con 0x5571f2c6e000 session 0x5571f57632c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:33.322148+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184107008 unmapped: 31449088 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 ms_handle_reset con 0x5571f2c92800 session 0x5571f3b07680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 ms_handle_reset con 0x5571f3035800 session 0x5571f4d8da40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:34.322259+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 31440896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4dc0000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462265 data_alloc: 234881024 data_used: 17362944
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:35.322390+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 31440896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f3d1c000/0x0/0x4ffc00000, data 0x49e0ff8/0x4c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.474841118s of 14.555397987s, submitted: 43
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:36.322538+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 31440896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:37.322712+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184115200 unmapped: 31440896 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:38.322837+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184131584 unmapped: 31424512 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:39.322975+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x49e2ab2/0x4c74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465879 data_alloc: 234881024 data_used: 17424384
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:40.323124+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:41.323272+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:42.323432+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:43.323559+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f2d5a800 session 0x5571f4d923c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:44.323678+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3d18000/0x0/0x4ffc00000, data 0x49e2ac2/0x4c75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467707 data_alloc: 234881024 data_used: 17424384
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:45.323798+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f321d000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:46.323923+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f321d000 session 0x5571f551b2c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3d18000/0x0/0x4ffc00000, data 0x49e2ac2/0x4c75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.145157814s of 11.193828583s, submitted: 11
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:47.324467+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5a883c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x49e2ab2/0x4c74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:48.324595+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184205312 unmapped: 31350784 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:49.324741+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 184221696 unmapped: 31334400 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482379 data_alloc: 234881024 data_used: 21598208
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:50.324867+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187170816 unmapped: 28385280 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f2c92800 session 0x5571f571a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:51.324989+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 28303360 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:52.325137+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187269120 unmapped: 28286976 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3d19000/0x0/0x4ffc00000, data 0x49e2b14/0x4c75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:53.325263+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187269120 unmapped: 28286976 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f2d5a800 session 0x5571f570fc20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:54.325379+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187285504 unmapped: 28270592 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492116 data_alloc: 234881024 data_used: 22401024
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:55.325495+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 187285504 unmapped: 28270592 heap: 215556096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:56.325603+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:57.325770+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f391a000/0x0/0x4ffc00000, data 0x4de2ab2/0x5074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f391a000/0x0/0x4ffc00000, data 0x4de2ab2/0x5074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:58.325898+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:59.326024+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:00.326151+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520007 data_alloc: 234881024 data_used: 22401024
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:01.326471+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.393831253s of 14.515746117s, submitted: 14
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:02.326621+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 31358976 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:03.326754+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 31088640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f391a000/0x0/0x4ffc00000, data 0x4de2ab2/0x5074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:04.326895+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 31088640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f3762400 session 0x5571f551a960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f4dc0000 session 0x5571f7f2f0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:05.327015+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528503 data_alloc: 234881024 data_used: 23855104
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 31088640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:06.327140+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:07.327301+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:08.327412+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f391a000/0x0/0x4ffc00000, data 0x4de2ab2/0x5074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:09.327537+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3b06f00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:10.327655+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528503 data_alloc: 234881024 data_used: 23855104
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:11.327866+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 31072256 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:12.328019+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _renew_subs
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.121050835s of 11.126111984s, submitted: 1
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188702720 unmapped: 31055872 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f2c92800 session 0x5571f7f2f680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:13.328144+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f2d5a800 session 0x5571f55b03c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f3762400 session 0x5571f4bef0e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 31031296 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3ee7800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f3ee7800 session 0x5571f55b1a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f3035800 session 0x5571f4ea72c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:14.328294+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f3d16000/0x0/0x4ffc00000, data 0x49e46d8/0x4c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188743680 unmapped: 31014912 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3b2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f2c92800 session 0x5571f4d921e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:15.328476+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509231 data_alloc: 234881024 data_used: 23859200
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f2d5a800 session 0x5571f500f4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188751872 unmapped: 31006720 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:16.329275+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188751872 unmapped: 31006720 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f3d16000/0x0/0x4ffc00000, data 0x49e473a/0x4c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 ms_handle_reset con 0x5571f3762400 session 0x5571f571a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:17.330351+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 492 ms_handle_reset con 0x5571f2c6e000 session 0x5571f7f2e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f3d17000/0x0/0x4ffc00000, data 0x49e46d8/0x4c77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:18.330745+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 492 ms_handle_reset con 0x5571f2c92800 session 0x5571f7f3af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:19.331882+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f3d14000/0x0/0x4ffc00000, data 0x49e6300/0x4c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:20.332390+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508826 data_alloc: 234881024 data_used: 23863296
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:21.332666+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:22.333156+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:23.333760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:24.333928+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 30965760 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.406382561s of 12.535206795s, submitted: 29
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:25.334413+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d15000/0x0/0x4ffc00000, data 0x49e629e/0x4c79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 23871488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:26.334859+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:27.335160+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:28.335382+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:29.335638+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:30.335906+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 23871488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:31.336179+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:32.336345+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:33.336576+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:34.336821+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:35.337003+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 23871488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:36.337276+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:37.337585+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:38.337785+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:39.338071+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:40.338338+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 23871488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:41.338697+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:42.338957+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 23K writes, 94K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8348 syncs, 2.87 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.47 MB, 0.04 MB/s
                                           Interval WAL: 10K writes, 4333 syncs, 2.42 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:43.339163+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 30924800 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:44.339362+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f4ea7c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f3843a40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:45.339537+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:46.339722+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:47.339892+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:48.340049+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:49.340357+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189677568 unmapped: 30081024 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:50.340542+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513000 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3762400
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.547378540s of 25.566181183s, submitted: 11
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:51.340711+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3762400 session 0x5571f4da8b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:52.340929+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:53.341177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7dbb/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:54.341353+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:55.341559+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513949 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:56.341811+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:57.342075+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:58.342420+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:59.342631+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7dbb/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:00.342858+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513949 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7dbb/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:01.343154+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:02.343328+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:03.343541+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 30064640 heap: 219758592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7dbb/0x4c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.488888741s of 13.507647514s, submitted: 4
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:04.343680+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f6583c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3d11000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 202342400 unmapped: 30015488 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:05.343813+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f4eb63c0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878373 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:06.343968+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:07.344187+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:08.344344+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d58/0x807c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:09.344516+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:10.344685+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878373 data_alloc: 234881024 data_used: 24395776
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:11.344842+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:12.345052+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:13.345168+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 42573824 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d58/0x807c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:14.345300+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d58/0x807c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f7f3ba40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:15.345450+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878533 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f3317e00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:16.345645+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f5a66000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f5a66000 session 0x5571f571b4a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.638098717s of 12.371903419s, submitted: 22
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3b06b40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:17.345826+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:18.346040+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:19.346177+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 42557440 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:20.346296+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3953342 data_alloc: 251658240 data_used: 34471936
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196960256 unmapped: 35397632 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:21.346429+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196960256 unmapped: 35397632 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:22.346594+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196960256 unmapped: 35397632 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:23.346768+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196960256 unmapped: 35397632 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:24.346992+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196960256 unmapped: 35397632 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:25.347264+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954462 data_alloc: 251658240 data_used: 34664448
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196976640 unmapped: 35381248 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:26.347447+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196927488 unmapped: 35430400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:27.347736+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196927488 unmapped: 35430400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:28.347916+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196927488 unmapped: 35430400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:29.348246+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 196927488 unmapped: 35430400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:30.348445+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0911000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3957502 data_alloc: 251658240 data_used: 35135488
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 197107712 unmapped: 35250176 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.666679382s of 14.694677353s, submitted: 7
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:31.348561+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 213262336 unmapped: 19095552 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:32.348729+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef371000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 214704128 unmapped: 17653760 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:33.348867+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 17342464 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:34.349130+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215023616 unmapped: 17334272 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:35.349309+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968830 data_alloc: 251658240 data_used: 36155392
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209174528 unmapped: 23183360 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:36.349462+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef741000/0x0/0x4ffc00000, data 0x7de7d67/0x807d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 23166976 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:37.349679+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 23166976 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:38.349991+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 23166976 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:39.350136+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209371136 unmapped: 22986752 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:40.350310+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4016774 data_alloc: 251658240 data_used: 36216832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209371136 unmapped: 22986752 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:41.350575+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209371136 unmapped: 22986752 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:42.350759+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef0f7000/0x0/0x4ffc00000, data 0x8431d67/0x86c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209403904 unmapped: 22953984 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:43.350946+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209403904 unmapped: 22953984 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:44.351179+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209403904 unmapped: 22953984 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:45.351394+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4016774 data_alloc: 251658240 data_used: 36216832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209403904 unmapped: 22953984 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:46.351585+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.945721626s of 15.409510612s, submitted: 163
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209526784 unmapped: 22831104 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:47.351817+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef127000/0x0/0x4ffc00000, data 0x8431d67/0x86c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:48.351967+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:49.352214+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef127000/0x0/0x4ffc00000, data 0x8431d67/0x86c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:50.352436+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012038 data_alloc: 251658240 data_used: 36216832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:51.352683+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:52.352916+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:53.353082+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:54.353388+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209559552 unmapped: 22798336 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:55.353577+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4014806 data_alloc: 251658240 data_used: 36229120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef127000/0x0/0x4ffc00000, data 0x8431d67/0x86c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f4ea6d20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f8d225a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209567744 unmapped: 22790144 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:56.353689+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f4e4a000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:57.353895+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:58.354065+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:59.354261+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:00.354697+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4010366 data_alloc: 251658240 data_used: 36229120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:01.354812+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:02.354943+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:03.355078+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:04.355254+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209649664 unmapped: 22708224 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:05.355372+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.770528793s of 18.887245178s, submitted: 37
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4010366 data_alloc: 251658240 data_used: 36229120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209682432 unmapped: 22675456 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:06.355587+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [1])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:07.355849+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:08.355998+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:09.356222+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:10.356417+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4010366 data_alloc: 251658240 data_used: 36229120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:11.356590+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:12.356726+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:13.356919+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:14.357065+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f6c89c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f6c89c00 session 0x5571f4e30960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:15.357205+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4010366 data_alloc: 251658240 data_used: 36229120
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:16.357345+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:17.357504+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209633280 unmapped: 22724608 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:18.357620+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209698816 unmapped: 22659072 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:19.357746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:20.357943+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012446 data_alloc: 251658240 data_used: 36417536
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:21.358040+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:22.358169+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:23.358300+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:24.358466+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:25.358638+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012446 data_alloc: 251658240 data_used: 36417536
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:26.359050+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:27.359284+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:28.359587+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:29.359772+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.100845337s of 24.497510910s, submitted: 108
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209920000 unmapped: 22437888 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:30.359937+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4038446 data_alloc: 251658240 data_used: 38113280
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210239488 unmapped: 22118400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:31.360120+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210239488 unmapped: 22118400 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:32.360262+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210247680 unmapped: 22110208 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:33.360438+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210247680 unmapped: 22110208 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:34.360535+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210247680 unmapped: 22110208 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:35.360783+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037182 data_alloc: 251658240 data_used: 38113280
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:36.360908+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:37.361069+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:38.361253+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:39.361493+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:40.361671+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037182 data_alloc: 251658240 data_used: 38113280
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210329600 unmapped: 22028288 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:41.361792+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.473871231s of 11.523114204s, submitted: 19
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:42.361934+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 21987328 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:43.362134+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 21987328 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:44.362305+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 21987328 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:45.362581+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 21987328 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4037390 data_alloc: 251658240 data_used: 38088704
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:46.362727+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 21913600 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:47.362899+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 21913600 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:48.363040+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 21913600 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:49.363230+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 21913600 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:50.363394+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 21913600 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4036158 data_alloc: 251658240 data_used: 38088704
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:51.363547+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 21848064 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:52.363691+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 21848064 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:53.363968+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 21848064 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:54.364266+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 21848064 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:55.364565+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f4d905a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f3a1e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4042718 data_alloc: 251658240 data_used: 38944768
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:56.364748+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:57.365450+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:58.365813+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:59.365925+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:00.366225+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4ef128000/0x0/0x4ffc00000, data 0x8431d58/0x86c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4042718 data_alloc: 251658240 data_used: 38944768
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:01.366551+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210870272 unmapped: 21487616 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:02.366900+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 21479424 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.371913910s of 21.410015106s, submitted: 13
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:03.367071+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f570fe00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:04.367392+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:05.367586+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532806 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:06.367754+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:07.367974+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:08.368144+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:09.368589+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:10.368823+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205086720 unmapped: 27271168 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f8439800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f8439800 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f5a88960
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533446 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:11.369167+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:12.369355+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:13.369566+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:14.369796+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:15.369990+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533446 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:16.370156+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205348864 unmapped: 27009024 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.795513153s of 13.827775955s, submitted: 16
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:17.370425+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205512704 unmapped: 26845184 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f4bee780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:18.370637+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:19.370829+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:20.371059+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538429 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:21.371202+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:22.371396+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b32000/0x0/0x4ffc00000, data 0x4a27d58/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:23.371544+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:24.371665+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:25.371852+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538429 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:26.371989+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:27.372165+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b32000/0x0/0x4ffc00000, data 0x4a27d58/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:28.372369+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:29.372596+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:30.372742+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b32000/0x0/0x4ffc00000, data 0x4a27d58/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538429 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:31.372852+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205520896 unmapped: 26836992 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.125852585s of 15.154190063s, submitted: 6
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f4b6c000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f8d225a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:32.372997+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 209739776 unmapped: 22618112 heap: 232357888 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c24c00 session 0x5571f7f3af00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:33.373142+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:34.373367+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:35.373544+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3731607 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:36.373718+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:37.373926+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:38.374060+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:39.374279+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:40.374419+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3731607 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:41.374572+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:42.374702+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:43.374897+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:44.375057+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:45.375192+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c24c00 session 0x5571f571a780
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3731607 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:46.375311+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:47.375479+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:48.375618+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:49.375773+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 35209216 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:50.375928+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:51.376126+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768087 data_alloc: 251658240 data_used: 29577216
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:52.376216+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:53.376392+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:54.376573+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:55.376796+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:56.376943+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768087 data_alloc: 251658240 data_used: 29577216
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:57.377156+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:58.377314+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f0f72000/0x0/0x4ffc00000, data 0x65e7d58/0x687c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:59.377446+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 206544896 unmapped: 34209792 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:00.377583+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.966415405s of 28.398801804s, submitted: 24
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 214949888 unmapped: 25804800 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:01.377729+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3783783 data_alloc: 251658240 data_used: 30068736
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215080960 unmapped: 25673728 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:02.378771+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215146496 unmapped: 25608192 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:03.379141+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:04.379838+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:05.380414+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:06.380980+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3869639 data_alloc: 251658240 data_used: 30289920
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:07.381721+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:08.382426+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:09.382604+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:10.382946+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4d921e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f3b2fa40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:11.383089+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870439 data_alloc: 251658240 data_used: 30314496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f7f3ab40
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:12.383416+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:13.383543+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:14.383845+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:15.383962+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:16.384138+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870439 data_alloc: 251658240 data_used: 30314496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:17.384568+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:18.384920+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:19.385176+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:20.385540+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:21.385768+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870439 data_alloc: 251658240 data_used: 30314496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:22.385923+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:23.386300+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:24.386563+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:25.386815+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:26.387060+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870439 data_alloc: 251658240 data_used: 30314496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f3035800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f3035800 session 0x5571f570e5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215154688 unmapped: 25600000 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:27.387332+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c24c00 session 0x5571f4eb74a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 215162880 unmapped: 25591808 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f4ea6000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c92800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.322549820s of 27.762292862s, submitted: 67
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c92800 session 0x5571f551a5a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:28.387444+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2d5a800
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f027d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:29.387618+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:30.387821+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:31.387953+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865058 data_alloc: 251658240 data_used: 30314496
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f4eff000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:32.388107+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:33.388239+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:34.388385+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:35.388516+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:36.388697+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865218 data_alloc: 251658240 data_used: 30339072
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:37.388879+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:38.389050+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:39.389272+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:40.389429+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:41.389642+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865218 data_alloc: 251658240 data_used: 30339072
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:42.389832+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212197376 unmapped: 28557312 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:43.389953+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212205568 unmapped: 28549120 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:44.390174+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212221952 unmapped: 28532736 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:45.390318+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212221952 unmapped: 28532736 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:46.390453+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870978 data_alloc: 251658240 data_used: 31059968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212221952 unmapped: 28532736 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:47.390620+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212221952 unmapped: 28532736 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:48.390817+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.241676331s of 20.271730423s, submitted: 7
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:49.390979+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:50.391226+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:51.391423+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871858 data_alloc: 251658240 data_used: 31059968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:52.391663+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:53.391847+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:54.392081+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:55.392464+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:56.392609+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872034 data_alloc: 251658240 data_used: 31059968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:57.392808+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:58.393036+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:59.393346+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:00.393498+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:01.393624+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871154 data_alloc: 251658240 data_used: 31059968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:02.393746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:03.393878+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:04.394026+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:05.394191+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:06.394363+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871154 data_alloc: 251658240 data_used: 31059968
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:07.394550+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045c000/0x0/0x4ffc00000, data 0x70fcd68/0x7392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212254720 unmapped: 28499968 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:08.394686+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.461668015s of 20.483251572s, submitted: 7
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2d5a800 session 0x5571f38425a0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f4eff000 session 0x5571f8d23680
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c24c00
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c24c00 session 0x5571f4e31c20
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:09.394815+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:10.394979+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:11.395142+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871273 data_alloc: 251658240 data_used: 31256576
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:12.395324+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:13.395507+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 212385792 unmapped: 28368896 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:14.395690+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: handle_auth_request added challenge on 0x5571f2c6e000
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 ms_handle_reset con 0x5571f2c6e000 session 0x5571f3a1e1e0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f045d000/0x0/0x4ffc00000, data 0x70fcd58/0x7391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:15.395895+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:16.396066+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:17.396274+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:18.396484+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:19.396646+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:20.396902+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:21.397197+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:22.397426+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:23.397778+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:24.398052+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:25.398407+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:26.398732+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:27.399169+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:28.399441+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:29.399772+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:30.400005+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:31.400285+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:32.400494+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:33.400874+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:34.401222+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:35.401578+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:36.401871+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:37.402213+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:38.402386+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:39.402570+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:40.402710+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:41.402866+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:42.403019+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:43.403169+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:44.403297+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:45.403464+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:46.403627+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:47.403819+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:48.404027+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:49.404267+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:50.404531+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:51.404746+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:52.404909+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:53.405048+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:54.405250+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:55.405348+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:56.405471+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:57.405644+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:58.405760+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:59.405950+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:00.406184+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:01.406355+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:02.406509+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:03.406661+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:04.406856+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:05.407039+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:06.407250+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:07.407386+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:08.407588+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:09.407748+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:10.407916+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:11.408044+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:12.408131+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:13.408280+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:14.408425+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:15.408598+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:16.408731+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:17.408901+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:18.409013+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:19.409132+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:20.409309+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:21.409435+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 208207872 unmapped: 32546816 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:22.409593+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:23.409716+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:24.409835+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:25.409999+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:26.410218+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:27.410408+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:28.410528+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:29.410714+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:30.410954+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:31.411166+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:32.411315+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:33.411610+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:34.411753+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:35.412016+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:36.412369+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:37.412568+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:38.412919+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:39.413061+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:40.413164+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:41.413389+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:42.414824+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:43.416173+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:44.417195+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:45.418266+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:46.419026+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:47.419484+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:48.419723+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:49.419983+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:50.420184+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:51.420338+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:26 compute-0 ceph-osd[90977]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:26 compute-0 ceph-osd[90977]: bluestore.MempoolThread(0x5571f1821b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549785 data_alloc: 234881024 data_used: 24399872
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207675392 unmapped: 33079296 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:52.420511+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 33046528 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:53.420682+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'config show' '{prefix=config show}'
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 33103872 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:54.420857+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207396864 unmapped: 33357824 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:55.421041+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f2b72000/0x0/0x4ffc00000, data 0x49e7d58/0x4c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:25:26 compute-0 ceph-osd[90977]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 33325056 heap: 240754688 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: tick
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_tickets
Nov 29 08:25:26 compute-0 ceph-osd[90977]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:56.421260+0000)
Nov 29 08:25:26 compute-0 ceph-osd[90977]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:25:26 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:26 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 08:25:26 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135193953' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:25:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:25:27.151 163500 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:25:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:25:27.152 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:25:27 compute-0 ovn_metadata_agent[163495]: 2025-11-29 08:25:27.152 163500 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:25:27 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19273 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 08:25:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548748791' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.19260 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.19263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1242098' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.19267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4135193953' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/548748791' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 08:25:27 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951768638' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:25:27 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19281 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 08:25:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928656011' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19285 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:28 compute-0 ceph-mon[75237]: pgmap v2217: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:28 compute-0 ceph-mon[75237]: from='client.19271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mon[75237]: from='client.19273 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1951768638' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2928656011' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:25:28 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 08:25:28 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830544210' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19293 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:25:29 compute-0 ceph-321e9cb7-01a2-5759-bf8c-981c9a64aa3e-mgr-compute-0-fwfehy[75523]: 2025-11-29T08:25:29.207+0000 7fed72bf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:25:29 compute-0 nova_compute[255040]: 2025-11-29 08:25:29.379 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 08:25:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1675071716' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 08:25:29 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604064164' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.19277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.19281 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.19285 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: pgmap v2218: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1830544210' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.19293 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:29 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1675071716' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 08:25:29 compute-0 podman[308168]: 2025-11-29 08:25:29.906770575 +0000 UTC m=+0.072810054 container health_status e4450a6e056f3d654702b4ea97b3271a40231929e17638afdb6eb1b0fbf326ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 08:25:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992661761' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 08:25:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47736284' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 08:25:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184629345' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 08:25:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751024014' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 08:25:30 compute-0 crontab[308355]: (root) LIST (root)
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3604064164' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1992661761' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/47736284' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4184629345' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2751024014' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 08:25:30 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 08:25:30 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114039298' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 08:25:31 compute-0 nova_compute[255040]: 2025-11-29 08:25:31.041 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609096550' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 23371776 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 heartbeat osd_stat(store_statfs(0x4ede9a000/0x0/0x4ffc00000, data 0xd6c42ea/0xd7d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.489654+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 ms_handle_reset con 0x558bf3d09800 session 0x558bf63b1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108093440 unmapped: 23273472 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.490467072s of 10.064993858s, submitted: 42
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 ms_handle_reset con 0x558bf3d09400 session 0x558bf63b14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2709931 data_alloc: 234881024 data_used: 12398592
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.489869+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 23142400 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.490178+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22872064 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.490368+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 heartbeat osd_stat(store_statfs(0x4ea3fb000/0x0/0x4ffc00000, data 0x111632ea/0x11273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 22749184 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.490577+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 22691840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.490833+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 heartbeat osd_stat(store_statfs(0x4e83fb000/0x0/0x4ffc00000, data 0x131632ea/0x13273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 22462464 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475631 data_alloc: 234881024 data_used: 12398592
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.490984+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 ms_handle_reset con 0x558bf3ff9400 session 0x558bf2d6cf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 13737984 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.491204+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 193 ms_handle_reset con 0x558bf52a4400 session 0x558bf52b72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 13606912 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.491415+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 ms_handle_reset con 0x558bf52a4400 session 0x558bf1de0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 ms_handle_reset con 0x558bf24be000 session 0x558bf1de14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 ms_handle_reset con 0x558bf52a7c00 session 0x558bf3d50f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 118013952 unmapped: 13352960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.491614+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 118153216 unmapped: 13213696 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 ms_handle_reset con 0x558bf3e19000 session 0x558bf52c4b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.491926+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 ms_handle_reset con 0x558bf215c800 session 0x558bf19afa40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 ms_handle_reset con 0x558bf5e5ac00 session 0x558bf3d50000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 ms_handle_reset con 0x558bf24be000 session 0x558bf4582f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 21446656 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 heartbeat osd_stat(store_statfs(0x4e03f3000/0x0/0x4ffc00000, data 0x1b1670d3/0x1b27a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4200850 data_alloc: 234881024 data_used: 12414976
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.492245+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 ms_handle_reset con 0x558bf52a4400 session 0x558bf1de1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.040750504s of 10.780436516s, submitted: 66
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 ms_handle_reset con 0x558bf3ff8800 session 0x558bf52410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 21438464 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.492445+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf52a7c00 session 0x558bf2fb94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 20938752 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf1947c00 session 0x558bf5dd1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf2c07860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf3e19000 session 0x558bf316c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.492608+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf24be000 session 0x558bf4978960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 24854528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf3ff8800 session 0x558bf30af860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.492805+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 ms_handle_reset con 0x558bf1947c00 session 0x558bf4399a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fa3b3000/0x0/0x4ffc00000, data 0x11a7702/0x12bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106512384 unmapped: 24854528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.493085+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24879104 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367406 data_alloc: 218103808 data_used: 7852032
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.493425+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24879104 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.493634+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24879104 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.493787+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24879104 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e5a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.493959+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 197 ms_handle_reset con 0x558bf5e5a800 session 0x558bf45a9a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 197 ms_handle_reset con 0x558bf2d45000 session 0x558bf49781e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24797184 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.494135+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 197 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0x11a9300/0x12bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24797184 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.494380+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370988 data_alloc: 218103808 data_used: 7860224
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf5e7a800 session 0x558bf19af2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24739840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.494606+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24739840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.494783+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e5b400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106627072 unmapped: 24739840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf5e5b400 session 0x558bf403e1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.590768814s of 12.429908752s, submitted: 194
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf2d45000 session 0x558bf2d01a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.494896+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e5a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf1947c00 session 0x558bf23fcd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf5e5a800 session 0x558bf2b56000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 105897984 unmapped: 25468928 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf5e7a800 session 0x558bf2d4c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf5ee6800 session 0x558bf30e7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.495151+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 ms_handle_reset con 0x558bf1947c00 session 0x558bf2b1d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 105766912 unmapped: 25600000 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.495357+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1375521 data_alloc: 218103808 data_used: 7335936
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 heartbeat osd_stat(store_statfs(0x4f9f9c000/0x0/0x4ffc00000, data 0x11aaeb8/0x12c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf3ff9400 session 0x558bf42e4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf42e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 25583616 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.495586+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf5ee6400 session 0x558bf42e5680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf3ffa400 session 0x558bf4321680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 105799680 unmapped: 25567232 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.495825+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 105799680 unmapped: 25567232 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf1947c00 session 0x558bf43e81e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.495958+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf2d00960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf3ff9400 session 0x558bf30aeb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf5ee6400 session 0x558bf17334a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf5e58800 session 0x558bf3e42780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 25337856 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf2d0a800 session 0x558bf21623c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.496153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf1947c00 session 0x558bf22b7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf43bf000 session 0x558bf4082000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 24657920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f913f000/0x0/0x4ffc00000, data 0x2006939/0x211f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.496433+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503156 data_alloc: 218103808 data_used: 7344128
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 24625152 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f913f000/0x0/0x4ffc00000, data 0x2006972/0x211f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.496715+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 24592384 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.496982+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7b800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 ms_handle_reset con 0x558bf43c0000 session 0x558bf316c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f913e000/0x0/0x4ffc00000, data 0x2006982/0x2120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24559616 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.497202+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24559616 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.497373+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.474607468s of 11.210055351s, submitted: 106
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 199 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24559616 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.497504+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1506773 data_alloc: 218103808 data_used: 7356416
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 24559616 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.498227+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311f400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107003904 unmapped: 24363008 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 200 heartbeat osd_stat(store_statfs(0x4f9139000/0x0/0x4ffc00000, data 0x20085b8/0x2124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:47.498478+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 200 ms_handle_reset con 0x558bf311f400 session 0x558bf318e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 24338432 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.498608+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 24051712 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 200 ms_handle_reset con 0x558bf5e7b800 session 0x558bf63b2960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.498819+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 24051712 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.499019+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1574435 data_alloc: 218103808 data_used: 7360512
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 200 heartbeat osd_stat(store_statfs(0x4f896e000/0x0/0x4ffc00000, data 0x27d262a/0x28f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 24051712 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf52a7400 session 0x558bf2b1cf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.499296+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.499479+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107110400 unmapped: 24256512 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.499593+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf24bec00 session 0x558bf3052780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf5e7bc00 session 0x558bf63b2000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.499725+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.126789570s of 10.070312500s, submitted: 58
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf2d0bc00 session 0x558bf45363c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f896b000/0x0/0x4ffc00000, data 0x27d4250/0x28f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.499930+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576264 data_alloc: 218103808 data_used: 7307264
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.500107+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.500344+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.500531+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f896c000/0x0/0x4ffc00000, data 0x27d41ee/0x28f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 24248320 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf24bec00 session 0x558bf19ec1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.500677+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf2d0cc00 session 0x558bf5dd03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f896c000/0x0/0x4ffc00000, data 0x27d41ee/0x28f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf623cc00 session 0x558bf52c5680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 24240128 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.500831+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf1947000 session 0x558bf52c5c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1579742 data_alloc: 218103808 data_used: 7311360
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107126784 unmapped: 24240128 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.501023+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 24231936 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf5e3bc00 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.501180+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 24231936 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.501450+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf1947000 session 0x558bf2d03c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf623cc00 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 22970368 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.501775+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 ms_handle_reset con 0x558bf24bec00 session 0x558bf3d51680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.718374252s of 10.022595406s, submitted: 23
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 heartbeat osd_stat(store_statfs(0x4f8857000/0x0/0x4ffc00000, data 0x28e81fe/0x2a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf2d0cc00 session 0x558bf53185a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf3d08800 session 0x558bf3e45a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf1947000 session 0x558bf45790e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf19b5800 session 0x558bf4047680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf24bec00 session 0x558bf45792c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf19b4000 session 0x558bf43e8f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 23904256 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.501959+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635152 data_alloc: 218103808 data_used: 7323648
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 24076288 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf3d08800 session 0x558bf4582d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.502227+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf1947000 session 0x558bf4582b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 ms_handle_reset con 0x558bf19b4000 session 0x558bf44e12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf2d0cc00 session 0x558bf3e423c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 24059904 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf19b5800 session 0x558bf4582780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.502473+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf24bec00 session 0x558bf63b3a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf19b4000 session 0x558bf44dcd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf19b5800 session 0x558bf531a960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 ms_handle_reset con 0x558bf1947000 session 0x558bf19fa780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 24059904 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.502729+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 heartbeat osd_stat(store_statfs(0x4f88de000/0x0/0x4ffc00000, data 0x27d7a5c/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 204 ms_handle_reset con 0x558bf2d0cc00 session 0x558bf516dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 24027136 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.502959+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 24010752 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.503165+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1540341 data_alloc: 218103808 data_used: 7327744
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 24010752 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.503357+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 204 heartbeat osd_stat(store_statfs(0x4f912c000/0x0/0x4ffc00000, data 0x200f62e/0x2131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 ms_handle_reset con 0x558bf45e8400 session 0x558bf520fa40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 23953408 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.503480+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 ms_handle_reset con 0x558bf1947000 session 0x558bf4320000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 23928832 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.503724+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 ms_handle_reset con 0x558bf19b4000 session 0x558bf44dd680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 23928832 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.503880+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 ms_handle_reset con 0x558bf52a4800 session 0x558bf42e4d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 heartbeat osd_stat(store_statfs(0x4f9a26000/0x0/0x4ffc00000, data 0x1714282/0x1837000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 ms_handle_reset con 0x558bf52a7400 session 0x558bf21fbe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107503616 unmapped: 23863296 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.504160+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478498 data_alloc: 218103808 data_used: 7389184
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.314363480s of 11.204186440s, submitted: 139
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 heartbeat osd_stat(store_statfs(0x4f9a26000/0x0/0x4ffc00000, data 0x1714282/0x1837000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 23846912 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 205 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.504319+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 206 ms_handle_reset con 0x558bf45e9c00 session 0x558bf3d51860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 206 ms_handle_reset con 0x558bf2d45000 session 0x558bf21fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 206 ms_handle_reset con 0x558bf1947000 session 0x558bf46925a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 207 ms_handle_reset con 0x558bf19b4000 session 0x558bf516c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 23748608 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.504481+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 23748608 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.504699+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 208 ms_handle_reset con 0x558bf52a7400 session 0x558bf23fd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 23691264 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.504952+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 208 ms_handle_reset con 0x558bf2d0a000 session 0x558bf4578f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 208 ms_handle_reset con 0x558bf52a4800 session 0x558bf19edc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 208 ms_handle_reset con 0x558bf5e3b000 session 0x558bf4046000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 107683840 unmapped: 23683072 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.505164+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494385 data_alloc: 218103808 data_used: 7397376
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf1947000 session 0x558bf21621e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf3d09400 session 0x558bf4320b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f9a1b000/0x0/0x4ffc00000, data 0x1719634/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf19b4000 session 0x558bf3e40b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf1947000 session 0x558bf403f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 22454272 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.505376+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf3d09400 session 0x558bf3e43680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf52a4800 session 0x558bf3e40f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108912640 unmapped: 22454272 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.505576+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf2d0a000 session 0x558bf3e412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 22446080 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf2d45000 session 0x558bf3e441e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.505703+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf1947000 session 0x558bf3e42960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf3d09400 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 22437888 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 ms_handle_reset con 0x558bf52a4800 session 0x558bf531af00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.505874+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f93ba000/0x0/0x4ffc00000, data 0x1d7813e/0x1ea3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22405120 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.506050+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1550711 data_alloc: 218103808 data_used: 7401472
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22405120 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.506232+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22405120 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.506382+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108961792 unmapped: 22405120 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.506532+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.362996101s of 12.773057938s, submitted: 119
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 22847488 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.506695+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf215c000 session 0x558bf1de0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf2d0a000 session 0x558bf43994a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf1947000 session 0x558bf2d021e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 heartbeat osd_stat(store_statfs(0x4f93b7000/0x0/0x4ffc00000, data 0x1d79d12/0x1ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 heartbeat osd_stat(store_statfs(0x4f93b7000/0x0/0x4ffc00000, data 0x1d79d12/0x1ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 22814720 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.506852+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf5e3b000 session 0x558bf3053860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1551180 data_alloc: 218103808 data_used: 7405568
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 22814720 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.507136+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 22806528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.507329+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 heartbeat osd_stat(store_statfs(0x4f9a16000/0x0/0x4ffc00000, data 0x171cc60/0x1847000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 22806528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf3e18800 session 0x558bf2c06960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.507499+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 22806528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.507666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2479000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 22806528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.508011+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503859 data_alloc: 218103808 data_used: 7401472
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2162b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 22806528 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.508213+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 22790144 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.508380+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 211 ms_handle_reset con 0x558bf2479000 session 0x558bf4578d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f9a11000/0x0/0x4ffc00000, data 0x171e8fa/0x184c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 22790144 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.508602+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.736628532s of 10.076858521s, submitted: 57
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 212 ms_handle_reset con 0x558bf45e8800 session 0x558bf44e0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 212 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf4579a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 212 ms_handle_reset con 0x558bf1947000 session 0x558bf5240d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 22749184 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.508766+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 212 ms_handle_reset con 0x558bf5e7a800 session 0x558bf40461e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 22716416 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.508907+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 213 ms_handle_reset con 0x558bf3ffa000 session 0x558bf2d01e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521551 data_alloc: 218103808 data_used: 7409664
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 213 ms_handle_reset con 0x558bf5ee8800 session 0x558bf4578000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f9a0a000/0x0/0x4ffc00000, data 0x172211c/0x1852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f9a0a000/0x0/0x4ffc00000, data 0x172211c/0x1852000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 22691840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.509128+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 214 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf21632c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 214 ms_handle_reset con 0x558bf5e7a800 session 0x558bf2d00d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 214 ms_handle_reset con 0x558bf24bf000 session 0x558bf2fb9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 22634496 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.509275+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 215 ms_handle_reset con 0x558bf45e8800 session 0x558bf42e6960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 215 ms_handle_reset con 0x558bf24bf000 session 0x558bf40403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 215 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf4979860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 22609920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.509420+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 ms_handle_reset con 0x558bf5e7a800 session 0x558bf5e5c3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 ms_handle_reset con 0x558bf1947000 session 0x558bf4582960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f9a00000/0x0/0x4ffc00000, data 0x1729020/0x185b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.509606+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108822528 unmapped: 22544384 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x1729514/0x185c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 ms_handle_reset con 0x558bf3e19400 session 0x558bf52403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x1729514/0x185c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.509792+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 22675456 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535874 data_alloc: 218103808 data_used: 7430144
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 218 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf22b61e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.509938+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 22675456 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.510185+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 22675456 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.510370+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 22667264 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 218 ms_handle_reset con 0x558bf24bf000 session 0x558bf3e45860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf1947000 session 0x558bf46934a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.278069496s of 10.002120018s, submitted: 217
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf3e19400 session 0x558bf23fda40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.510538+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 22650880 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf5e3c000 session 0x558bf40830e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:50.510702+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 22634496 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1548644 data_alloc: 218103808 data_used: 7434240
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f99f9000/0x0/0x4ffc00000, data 0x172ca1c/0x1863000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.510937+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 22634496 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2d6d0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf1947000 session 0x558bf3e40780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.511136+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 22822912 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 ms_handle_reset con 0x558bf24bf000 session 0x558bf3e40000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.511297+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 22814720 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf3e19400 session 0x558bf531be00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.511514+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 22798336 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.511668+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 22798336 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf1947400 session 0x558bf30534a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1548727 data_alloc: 218103808 data_used: 7376896
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf465d2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf1947000 session 0x558bf316cd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 heartbeat osd_stat(store_statfs(0x4f99fa000/0x0/0x4ffc00000, data 0x172e41e/0x1863000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf24bf000 session 0x558bf40401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.511823+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 22790144 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf3e19400 session 0x558bf42e41e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf5e58400 session 0x558bf42e61e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2d4c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.512019+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 22626304 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.512152+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 22626304 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.544179916s of 10.053987503s, submitted: 121
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.512360+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 22609920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf24bf000 session 0x558bf403ef00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf1947000 session 0x558bf2fcf680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.512500+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf5e3d000 session 0x558bf52b12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 22609920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf3f93800 session 0x558bf1de1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1608985 data_alloc: 218103808 data_used: 7385088
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.512920+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 109125632 unmapped: 22241280 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf1947000 session 0x558bf19fcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f936d000/0x0/0x4ffc00000, data 0x1db808c/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.513051+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 109125632 unmapped: 22241280 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf1946800 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.513142+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 22306816 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf215c400 session 0x558bf4583860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.513262+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111312896 unmapped: 20054016 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f936d000/0x0/0x4ffc00000, data 0x1db808c/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf24bd000 session 0x558bf42a6d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf516c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.513401+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1649442 data_alloc: 234881024 data_used: 12742656
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf1946800 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf1947000 session 0x558bf5dd0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.513530+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f936f000/0x0/0x4ffc00000, data 0x1db808c/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.513697+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.513818+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.513942+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.514146+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 20037632 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1649442 data_alloc: 234881024 data_used: 12742656
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf3dbb800 session 0x558bf4537a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.514308+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111960064 unmapped: 19406848 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf2d0a800 session 0x558bf4536b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.514469+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111960064 unmapped: 19406848 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f936f000/0x0/0x4ffc00000, data 0x1db808c/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.588603973s of 14.628640175s, submitted: 18
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf63b2d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.514669+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 19390464 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.514808+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 19365888 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.514972+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14589952 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf24be000 session 0x558bf21fa3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1754745 data_alloc: 234881024 data_used: 13676544
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.515164+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf2343680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 15212544 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f8606000/0x0/0x4ffc00000, data 0x2b200ee/0x2c58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.515378+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 14057472 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.515522+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117407744 unmapped: 13959168 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf5ee8000 session 0x558bf52b0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.515759+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 15613952 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf5e7a400 session 0x558bf2d6de00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.515903+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 15532032 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812935 data_alloc: 234881024 data_used: 13959168
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f815d000/0x0/0x4ffc00000, data 0x2fc90ee/0x3101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.516082+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 15532032 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf43e9680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.516290+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 15532032 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.919167519s of 10.065400124s, submitted: 181
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.516464+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf30523c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf24be000 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115843072 unmapped: 15523840 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf5ee8000 session 0x558bf40823c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.516645+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 17211392 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf24be800 session 0x558bf52b70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.516746+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114442240 unmapped: 16924672 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1817200 data_alloc: 234881024 data_used: 13963264
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f8136000/0x0/0x4ffc00000, data 0x2fee160/0x3128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf23fcf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.516884+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.517034+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.517178+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.517325+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f853d000/0x0/0x4ffc00000, data 0x2be70fe/0x2d20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f853d000/0x0/0x4ffc00000, data 0x2be70fe/0x2d20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.517462+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1780643 data_alloc: 234881024 data_used: 13963264
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.517640+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 16916480 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf43c0800 session 0x558bf516cf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.517801+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f853d000/0x0/0x4ffc00000, data 0x2be710e/0x2d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf3fb7800 session 0x558bf44dc3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2479c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf2479c00 session 0x558bf19ec780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114483200 unmapped: 16883712 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf3d08000 session 0x558bf52a0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf52b14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2479c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.517925+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 15007744 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.113865376s of 10.467970848s, submitted: 71
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.518001+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 16621568 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 221 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.518170+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16613376 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf3fb7800 session 0x558bf19fa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf24bf400 session 0x558bf21634a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf3dba800 session 0x558bf2163680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1804014 data_alloc: 234881024 data_used: 13975552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf2479c00 session 0x558bf2d6c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf52a4000 session 0x558bf21fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf42e5c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2479c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.518320+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114753536 unmapped: 16613376 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 heartbeat osd_stat(store_statfs(0x4f838d000/0x0/0x4ffc00000, data 0x2d93cf2/0x2ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.518469+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 16580608 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 ms_handle_reset con 0x558bf24bf400 session 0x558bf403e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 ms_handle_reset con 0x558bf2479c00 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 heartbeat osd_stat(store_statfs(0x4f838d000/0x0/0x4ffc00000, data 0x2d93cf2/0x2ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.518625+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114786304 unmapped: 16580608 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 heartbeat osd_stat(store_statfs(0x4f8389000/0x0/0x4ffc00000, data 0x2d95928/0x2ed4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 ms_handle_reset con 0x558bf3fb7800 session 0x558bf44dcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 ms_handle_reset con 0x558bf5e7cc00 session 0x558bf403ef00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.518776+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 16179200 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 ms_handle_reset con 0x558bf3f92c00 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.519006+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e5a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 223 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 16162816 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 224 ms_handle_reset con 0x558bf3dba800 session 0x558bf42e5860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1820260 data_alloc: 234881024 data_used: 13987840
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.519205+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 16138240 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.519418+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 14934016 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 224 ms_handle_reset con 0x558bf43c2c00 session 0x558bf4583a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.519548+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 224 ms_handle_reset con 0x558bf1947000 session 0x558bf2d03c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 14893056 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 224 heartbeat osd_stat(store_statfs(0x4f8356000/0x0/0x4ffc00000, data 0x2dc652f/0x2f08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.324342728s of 10.326644897s, submitted: 37
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 225 ms_handle_reset con 0x558bf3dba800 session 0x558bf4583c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.519666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 225 ms_handle_reset con 0x558bf3d09800 session 0x558bf4040b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 15474688 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 225 ms_handle_reset con 0x558bf193dc00 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.519805+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115892224 unmapped: 15474688 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1835362 data_alloc: 234881024 data_used: 15646720
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 ms_handle_reset con 0x558bf52a5000 session 0x558bf465d680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.519945+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 14426112 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 ms_handle_reset con 0x558bf3f90c00 session 0x558bf3e410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.520191+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 ms_handle_reset con 0x558bf193dc00 session 0x558bf19af0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 ms_handle_reset con 0x558bf3ff8400 session 0x558bf4082000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 14426112 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 heartbeat osd_stat(store_statfs(0x4f834f000/0x0/0x4ffc00000, data 0x2dc9ed1/0x2f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 227 ms_handle_reset con 0x558bf3d09800 session 0x558bf45374a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.520368+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 14376960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 227 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x2dcbaf9/0x2f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.520516+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 14376960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 ms_handle_reset con 0x558bf5e7a000 session 0x558bf63b3680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 ms_handle_reset con 0x558bf3f93c00 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f834d000/0x0/0x4ffc00000, data 0x2dcbaf9/0x2f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f834d000/0x0/0x4ffc00000, data 0x2dcbaf9/0x2f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.520646+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 ms_handle_reset con 0x558bf193dc00 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 14327808 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 229 ms_handle_reset con 0x558bf3d09800 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1851079 data_alloc: 234881024 data_used: 15654912
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 229 ms_handle_reset con 0x558bf3dba800 session 0x558bf3e43860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.521170+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 229 heartbeat osd_stat(store_statfs(0x4f8343000/0x0/0x4ffc00000, data 0x2dd07b6/0x2f18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 14327808 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.521584+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 229 heartbeat osd_stat(store_statfs(0x4f8343000/0x0/0x4ffc00000, data 0x2dd07b6/0x2f18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 14311424 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 230 ms_handle_reset con 0x558bf3ff8400 session 0x558bf2fceb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.521751+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 10117120 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.884241104s of 10.206487656s, submitted: 108
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.521895+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 10248192 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 232 ms_handle_reset con 0x558bf5e7c000 session 0x558bf2fb94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.522123+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f8088000/0x0/0x4ffc00000, data 0x30829ff/0x31cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 10559488 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904378 data_alloc: 234881024 data_used: 16490496
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f8081000/0x0/0x4ffc00000, data 0x309160b/0x31dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 233 ms_handle_reset con 0x558bf193dc00 session 0x558bf44dcd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 233 ms_handle_reset con 0x558bf3e19400 session 0x558bf4083680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.522936+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 10559488 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.523076+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 10543104 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 233 ms_handle_reset con 0x558bf3d09800 session 0x558bf2d01680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.523298+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 10510336 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.523557+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 233 handle_osd_map epochs [235,235], i have 233, src has [1,235]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 233 handle_osd_map epochs [234,235], i have 233, src has [1,235]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f807a000/0x0/0x4ffc00000, data 0x3095217/0x31e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.523991+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1915225 data_alloc: 234881024 data_used: 16498688
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.524209+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.524514+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 ms_handle_reset con 0x558bf24bc800 session 0x558bf52a0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.524661+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f8074000/0x0/0x4ffc00000, data 0x309a54f/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.524818+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.454097748s of 10.278900146s, submitted: 74
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 ms_handle_reset con 0x558bf5e3cc00 session 0x558bf21faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f8074000/0x0/0x4ffc00000, data 0x309a54f/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.525065+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 ms_handle_reset con 0x558bf24bf000 session 0x558bf23fc960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 ms_handle_reset con 0x558bf5e3d000 session 0x558bf4040000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 10993664 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912941 data_alloc: 234881024 data_used: 16498688
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 ms_handle_reset con 0x558bf193dc00 session 0x558bf3e45860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.525206+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 15441920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.525378+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 15441920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.525504+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 15441920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.525661+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 15441920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.525817+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115924992 unmapped: 15441920 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1715669 data_alloc: 234881024 data_used: 10522624
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 237 ms_handle_reset con 0x558bf3ffc400 session 0x558bf43994a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 237 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1bdbfc7/0x1d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.526045+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 15400960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.526182+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 15400960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.526326+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 15400960 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf19b5400 session 0x558bf43992c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.526428+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf193dc00 session 0x558bf22b63c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 15376384 heap: 131366912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf43c2c00 session 0x558bf2343e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3e19800 session 0x558bf465d0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 heartbeat osd_stat(store_statfs(0x4f9527000/0x0/0x4ffc00000, data 0x1be14c1/0x1d35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3d09400 session 0x558bf43e8780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.526589+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3dba000 session 0x558bf42e45a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.495403290s of 10.952305794s, submitted: 133
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf193dc00 session 0x558bf63b2f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 19390464 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3d09400 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3dba000 session 0x558bf45a90e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3e19800 session 0x558bf2b57680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf43c2c00 session 0x558bf2c070e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1806371 data_alloc: 234881024 data_used: 10530816
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.530187+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 19390464 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.530321+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 19382272 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.530610+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 ms_handle_reset con 0x558bf3d09400 session 0x558bf2d4c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 19382272 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.530813+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 19382272 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.530992+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf193dc00 session 0x558bf43e90e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 19382272 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1808112 data_alloc: 234881024 data_used: 10543104
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8b03000/0x0/0x4ffc00000, data 0x2603fe7/0x275a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf1946800 session 0x558bf21630e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.531171+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf5e5a400 session 0x558bf4082f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf623d800 session 0x558bf23fdc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 19382272 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf1946800 session 0x558bf19fa3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf193dc00 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.531301+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 20824064 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf3d09400 session 0x558bf19ae3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e5a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.531434+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf5e5a400 session 0x558bf5120000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 20824064 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.531625+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8b86000/0x0/0x4ffc00000, data 0x2173fa4/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 20824064 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.531752+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8b86000/0x0/0x4ffc00000, data 0x2173fa4/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 17162240 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1802069 data_alloc: 234881024 data_used: 17514496
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8b86000/0x0/0x4ffc00000, data 0x2173fa4/0x22c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.531908+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf52a7c00 session 0x558bf2fcfe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.540328026s of 10.904197693s, submitted: 96
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf193dc00 session 0x558bf318e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 16244736 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.532080+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 16244736 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8b89000/0x0/0x4ffc00000, data 0x2173ef2/0x22c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.532264+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.532429+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 ms_handle_reset con 0x558bf1946800 session 0x558bf43201e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.532721+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1804475 data_alloc: 234881024 data_used: 17518592
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.532883+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.533039+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.533162+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 16211968 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8b85000/0x0/0x4ffc00000, data 0x21759ac/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.533982+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8b85000/0x0/0x4ffc00000, data 0x21759ac/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf5e3d400 session 0x558bf3e44960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119635968 unmapped: 15933440 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.534169+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 119635968 unmapped: 15933440 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1856003 data_alloc: 234881024 data_used: 17518592
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.535262+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.157931328s of 10.038742065s, submitted: 35
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf5e3a400 session 0x558bf2162b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 120725504 unmapped: 14843904 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.535389+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126337024 unmapped: 9232384 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.535622+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8050000/0x0/0x4ffc00000, data 0x2caa9cf/0x2dfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 7004160 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.536025+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf52a7400 session 0x558bf40830e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf3d07400 session 0x558bf4040f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 7004160 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf193dc00 session 0x558bf318ef00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8046000/0x0/0x4ffc00000, data 0x2cb39cf/0x2e07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.536198+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf1946800 session 0x558bf19fa1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 9469952 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1874842 data_alloc: 234881024 data_used: 18288640
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.536476+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 9469952 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.536765+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f850b000/0x0/0x4ffc00000, data 0x26c6a1e/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf5e3a400 session 0x558bf403e780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126099456 unmapped: 9469952 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.536912+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126107648 unmapped: 9461760 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2c7d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.537289+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126107648 unmapped: 9461760 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.537362+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 ms_handle_reset con 0x558bf2c7d400 session 0x558bf23434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 9453568 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1880958 data_alloc: 234881024 data_used: 18296832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.537518+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f8630000/0x0/0x4ffc00000, data 0x26c6e90/0x281e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.954082489s of 10.410627365s, submitted: 83
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 9453568 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.537644+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 9453568 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 243 ms_handle_reset con 0x558bf5e3d400 session 0x558bf403e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.537796+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 9453568 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.538024+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 9453568 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 243 ms_handle_reset con 0x558bf2d45c00 session 0x558bf2d4d860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.538170+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 243 ms_handle_reset con 0x558bf5ee9c00 session 0x558bf4083860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 9404416 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1890688 data_alloc: 234881024 data_used: 18333696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.538337+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126173184 unmapped: 9396224 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.538491+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f862c000/0x0/0x4ffc00000, data 0x26ca21a/0x2821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 ms_handle_reset con 0x558bf193dc00 session 0x558bf2162780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 126173184 unmapped: 9396224 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 ms_handle_reset con 0x558bf3ff8400 session 0x558bf44dcd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.538802+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 ms_handle_reset con 0x558bf43c1800 session 0x558bf5e5d0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 10543104 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 ms_handle_reset con 0x558bf193dc00 session 0x558bf4082960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.538969+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 ms_handle_reset con 0x558bf2d45c00 session 0x558bf44dcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 ms_handle_reset con 0x558bf3ff8400 session 0x558bf52b0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 10502144 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 ms_handle_reset con 0x558bf2d0bc00 session 0x558bf21621e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.539205+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 ms_handle_reset con 0x558bf193dc00 session 0x558bf403f680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 ms_handle_reset con 0x558bf3ff8400 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 10510336 heap: 135569408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891613 data_alloc: 234881024 data_used: 18337792
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.539371+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 ms_handle_reset con 0x558bf43c1800 session 0x558bf19fa1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 ms_handle_reset con 0x558bf52a5c00 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137838592 unmapped: 5341184 heap: 143179776 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.493886948s of 10.062929153s, submitted: 114
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.539974+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 ms_handle_reset con 0x558bf2d45c00 session 0x558bf30e6960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 145899520 unmapped: 7266304 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 ms_handle_reset con 0x558bf43c1400 session 0x558bf2343e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.540314+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8cf9000/0x0/0x4ffc00000, data 0x301bcd6/0x3174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 12681216 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.540869+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 12681216 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.541841+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 12681216 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2066486 data_alloc: 234881024 data_used: 24276992
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.542113+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 12640256 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.542399+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 247 ms_handle_reset con 0x558bf193dc00 session 0x558bf44dd4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 20168704 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 247 ms_handle_reset con 0x558bf3dbb400 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.542781+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f8650000/0x0/0x4ffc00000, data 0x36c3952/0x381d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 132997120 unmapped: 20168704 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.542943+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 19120128 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.543153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f864c000/0x0/0x4ffc00000, data 0x36c555e/0x3820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf193cc00 session 0x558bf2fcf680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf52a7000 session 0x558bf30e72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134045696 unmapped: 19120128 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2049588 data_alloc: 234881024 data_used: 24276992
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.543326+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf193cc00 session 0x558bf4040780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf193dc00 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf3dbb400 session 0x558bf2fce780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134070272 unmapped: 19095552 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.543452+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f864d000/0x0/0x4ffc00000, data 0x36c555e/0x3820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134070272 unmapped: 19095552 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.543659+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.841867924s of 11.478613853s, submitted: 48
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf43c0800 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 ms_handle_reset con 0x558bf43c1400 session 0x558bf2fb9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 19079168 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.543797+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 heartbeat osd_stat(store_statfs(0x4f864d000/0x0/0x4ffc00000, data 0x36c555e/0x3820000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134086656 unmapped: 19079168 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.543984+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 248 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 19062784 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf43bfc00 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf4083a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2057950 data_alloc: 234881024 data_used: 24289280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.544270+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 19062784 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.544442+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf623c800 session 0x558bf318e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf3ffc400 session 0x558bf21fbe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf2d0c000 session 0x558bf21faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf4082f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134103040 unmapped: 19062784 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.544599+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134111232 unmapped: 19054592 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.544725+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f8648000/0x0/0x4ffc00000, data 0x36c70d1/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf3ffc400 session 0x558bf22b7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf2d0c000 session 0x558bf4398780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf5f92c00 session 0x558bf19fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf5f92400 session 0x558bf21623c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf403f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf2d45400 session 0x558bf4320000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134135808 unmapped: 19030016 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.544860+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 ms_handle_reset con 0x558bf2d0c000 session 0x558bf318e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 251 ms_handle_reset con 0x558bf5f92c00 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 251 ms_handle_reset con 0x558bf43bfc00 session 0x558bf43985a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134144000 unmapped: 19021824 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2066713 data_alloc: 234881024 data_used: 24236032
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.545192+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 251 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf3e44d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 251 ms_handle_reset con 0x558bf3ffc400 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf2d45400 session 0x558bf44dc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134152192 unmapped: 19013632 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf43bfc00 session 0x558bf45781e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf2d0c000 session 0x558bf2fce780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf5f92c00 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.545350+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf3ffc400 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 ms_handle_reset con 0x558bf43bfc00 session 0x558bf19fa1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 19005440 heap: 153165824 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.545505+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.735878468s of 10.059769630s, submitted: 104
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 134414336 unmapped: 52363264 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.545622+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 heartbeat osd_stat(store_statfs(0x4f623c000/0x0/0x4ffc00000, data 0x5acc523/0x5c32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 44736512 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.545941+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 47702016 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2823595 data_alloc: 251658240 data_used: 30285824
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.546191+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 139206656 unmapped: 47570944 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.546352+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 46448640 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.546524+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 heartbeat osd_stat(store_statfs(0x4efa3c000/0x0/0x4ffc00000, data 0xc2cc523/0xc432000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 145825792 unmapped: 40951808 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.546668+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150126592 unmapped: 36651008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.546825+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150249472 unmapped: 36528128 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.546990+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818553 data_alloc: 251658240 data_used: 30294016
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150405120 unmapped: 36372480 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.547155+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e7639000/0x0/0x4ffc00000, data 0x146cdfdd/0x14835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154681344 unmapped: 32096256 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.547316+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.254049540s of 10.012930870s, submitted: 255
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154976256 unmapped: 31801344 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.547463+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 heartbeat osd_stat(store_statfs(0x4e4639000/0x0/0x4ffc00000, data 0x176cdfdd/0x17835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147873792 unmapped: 38903808 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.547614+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf52a5400 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf3e18800 session 0x558bf45785a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf3e18800 session 0x558bf1de10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 43008000 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.547816+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4593969 data_alloc: 251658240 data_used: 30294016
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf3ffc400 session 0x558bf40830e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 42803200 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.547901+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf43bfc00 session 0x558bf3e441e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 144769024 unmapped: 42008576 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.548204+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 ms_handle_reset con 0x558bf52a5400 session 0x558bf531ad20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 145252352 unmapped: 41525248 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.548377+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf5f92c00 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f65f0000/0x0/0x4ffc00000, data 0x3715b93/0x387c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 41508864 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.548585+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 145268736 unmapped: 41508864 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.548782+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2255405 data_alloc: 251658240 data_used: 34328576
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf3e18800 session 0x558bf21fb4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf3ffc400 session 0x558bf4398d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.548906+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x3715b93/0x387c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.549068+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.036072731s of 10.070055008s, submitted: 189
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf43bfc00 session 0x558bf4578000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.549253+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.549383+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f85f2000/0x0/0x4ffc00000, data 0x3715b93/0x387c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.549579+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2255651 data_alloc: 251658240 data_used: 34852864
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf3e19400 session 0x558bf49783c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146522112 unmapped: 40255488 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.549730+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146407424 unmapped: 40370176 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.549880+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 ms_handle_reset con 0x558bf5ee6000 session 0x558bf52b0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f85f2000/0x0/0x4ffc00000, data 0x3715b93/0x387c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146415616 unmapped: 40361984 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.550046+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3e18800 session 0x558bf316c3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 39878656 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.550280+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3dba800 session 0x558bf2b57c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 39878656 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.550459+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2316777 data_alloc: 251658240 data_used: 34861056
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.550641+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 39878656 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f7ee1000/0x0/0x4ffc00000, data 0x3e236af/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.550814+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 38518784 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f7ee1000/0x0/0x4ffc00000, data 0x3e236af/0x3f8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.551006+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 38518784 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.165941715s of 10.850604057s, submitted: 69
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.551188+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 38518784 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.551444+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157360128 unmapped: 29417472 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2403959 data_alloc: 251658240 data_used: 36593664
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f766a000/0x0/0x4ffc00000, data 0x469b6af/0x4804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,4])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.551631+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157368320 unmapped: 29409280 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf193dc00 session 0x558bf2162960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.551810+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 149741568 unmapped: 37036032 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5f92800 session 0x558bf21fbe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf2478c00 session 0x558bf21fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.551988+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 36495360 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5ee6800 session 0x558bf22b61e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3f91000 session 0x558bf19fc5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.552148+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 36495360 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3e19800 session 0x558bf403f4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf623cc00 session 0x558bf19fad20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.552283+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 36487168 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf1947800 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2421162 data_alloc: 251658240 data_used: 36605952
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3e19800 session 0x558bf42e4d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f72fb000/0x0/0x4ffc00000, data 0x4a09711/0x4b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5ee6800 session 0x558bf316c3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.552492+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 149651456 unmapped: 37126144 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3f91000 session 0x558bf42e43c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.552819+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 34881536 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf623cc00 session 0x558bf52b0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.553002+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 34881536 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.568102360s of 10.011026382s, submitted: 63
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70a5000/0x0/0x4ffc00000, data 0x4c5e911/0x4dc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5f62000 session 0x558bf3e445a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.553197+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 35037184 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.553403+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 27631616 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2521039 data_alloc: 268435456 data_used: 46813184
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf44dd4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf2d45400 session 0x558bf4578780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.553584+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161341440 unmapped: 25436160 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.553715+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161341440 unmapped: 25436160 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.553850+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162390016 unmapped: 24387584 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.554018+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162390016 unmapped: 24387584 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x4c18902/0x4d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.554244+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162422784 unmapped: 24354816 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2526658 data_alloc: 268435456 data_used: 48324608
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.554441+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 24346624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.554625+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 24346624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x4c18902/0x4d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.554823+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 24346624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x4c18902/0x4d82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.265880108s of 10.424218178s, submitted: 18
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497614738' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.555033+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 24346624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.555251+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162439168 unmapped: 24338432 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2525709 data_alloc: 268435456 data_used: 48324608
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x4c188f2/0x4d81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.555387+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162488320 unmapped: 24289280 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.555531+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164241408 unmapped: 22536192 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x4c188f2/0x4d81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,7])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.555666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166666240 unmapped: 20111360 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.555844+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 170131456 unmapped: 16646144 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.556000+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 19931136 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2573381 data_alloc: 268435456 data_used: 48394240
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.556171+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 19931136 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f6b4b000/0x0/0x4ffc00000, data 0x51ba8f2/0x5323000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.582783+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 19931136 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.583068+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 169484288 unmapped: 17293312 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f6866000/0x0/0x4ffc00000, data 0x54978f2/0x5600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,7])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.638751030s of 10.264914513s, submitted: 80
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.583245+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 17580032 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.583437+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 17580032 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2613037 data_alloc: 268435456 data_used: 48390144
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.583580+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 170246144 unmapped: 16531456 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.583790+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172597248 unmapped: 14180352 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f646a000/0x0/0x4ffc00000, data 0x58938f2/0x59fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,5,1,18])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.583929+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172589056 unmapped: 14188544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.584135+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172589056 unmapped: 14188544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f646a000/0x0/0x4ffc00000, data 0x58938f2/0x59fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,6,18])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.780549526s, txc = 0x558bf245a900
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.780166149s, txc = 0x558bf314af00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778464794s, txc = 0x558bf245bb00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778217316s, txc = 0x558bf3096900
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778244019s, txc = 0x558bf3071800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778017998s, txc = 0x558bf21b2f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.778039455s, txc = 0x558bf218bb00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777891159s, txc = 0x558bf1ea0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777847767s, txc = 0x558bf245af00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777720928s, txc = 0x558bf24a6900
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777545452s, txc = 0x558bf23acf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777036667s, txc = 0x558bf21b3200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.777060509s, txc = 0x558bf310f500
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776832104s, txc = 0x558bf2d50300
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776288033s, txc = 0x558bf23d8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.776040554s, txc = 0x558bf1ec3b00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.595216274s, txc = 0x558bf24a7b00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.593986034s, txc = 0x558bf2516600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.584307+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 14254080 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2649623 data_alloc: 268435456 data_used: 48381952
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f6204000/0x0/0x4ffc00000, data 0x5aff8f2/0x5c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf3f91000 session 0x558bf23425a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.584445+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171450368 unmapped: 15327232 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.584643+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 14843904 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.584778+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171933696 unmapped: 14843904 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7b800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.584908+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.524443746s of 10.415996552s, submitted: 76
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 169476096 unmapped: 17301504 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.585083+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5e7b800 session 0x558bf403e1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f4f58000/0x0/0x4ffc00000, data 0x5c0a8f2/0x5d73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 16678912 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2679354 data_alloc: 268435456 data_used: 49999872
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.585262+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 16678912 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.585387+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171540480 unmapped: 15237120 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 ms_handle_reset con 0x558bf5e7d000 session 0x558bf3e441e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.586670+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171646976 unmapped: 15130624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f509b000/0x0/0x4ffc00000, data 0x5ac06e2/0x5c27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.586827+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 15097856 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 255 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.586947+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 256 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2342000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162439168 unmapped: 24338432 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2472807 data_alloc: 251658240 data_used: 37224448
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 256 ms_handle_reset con 0x558bf52a5000 session 0x558bf43e9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.587107+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 257 ms_handle_reset con 0x558bf52a6000 session 0x558bf3e41e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162471936 unmapped: 24305664 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.587270+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162471936 unmapped: 24305664 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 258 ms_handle_reset con 0x558bf5e58000 session 0x558bf318e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 258 ms_handle_reset con 0x558bf52a4000 session 0x558bf3d51c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f6096000/0x0/0x4ffc00000, data 0x4acfeec/0x4c38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.587410+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3b400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 258 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2163860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150069248 unmapped: 36708352 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.587539+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.500047684s of 10.075272560s, submitted: 187
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 ms_handle_reset con 0x558bf5e3b400 session 0x558bf19fbc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150077440 unmapped: 36700160 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 heartbeat osd_stat(store_statfs(0x4f6fdf000/0x0/0x4ffc00000, data 0x3b85ae8/0x3cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.587717+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 ms_handle_reset con 0x558bf5f93c00 session 0x558bf45792c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150077440 unmapped: 36700160 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2269215 data_alloc: 234881024 data_used: 21032960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 ms_handle_reset con 0x558bf2d0d000 session 0x558bf23434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 ms_handle_reset con 0x558bf3dba000 session 0x558bf531ad20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.587848+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 36634624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.587990+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 36634624 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 heartbeat osd_stat(store_statfs(0x4f6fdd000/0x0/0x4ffc00000, data 0x3b87692/0x3cf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 ms_handle_reset con 0x558bf3e19800 session 0x558bf44dd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 ms_handle_reset con 0x558bf3ff9c00 session 0x558bf30e74a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f6fdd000/0x0/0x4ffc00000, data 0x3b87692/0x3cf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 ms_handle_reset con 0x558bf3dbb400 session 0x558bf2d6c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.588753+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141901824 unmapped: 44875776 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 ms_handle_reset con 0x558bf2478400 session 0x558bf2d021e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 ms_handle_reset con 0x558bf43bf800 session 0x558bf2c070e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.588926+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141918208 unmapped: 44859392 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 261 ms_handle_reset con 0x558bf2478400 session 0x558bf45365a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.589074+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141918208 unmapped: 44859392 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 262 ms_handle_reset con 0x558bf3dbb400 session 0x558bf4537e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2079517 data_alloc: 234881024 data_used: 11665408
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 ms_handle_reset con 0x558bf3e19800 session 0x558bf4320780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.589259+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 44908544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f8148000/0x0/0x4ffc00000, data 0x2a17638/0x2b85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 ms_handle_reset con 0x558bf3ff9c00 session 0x558bf2fb94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.589399+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24abc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 ms_handle_reset con 0x558bf5ee8800 session 0x558bf2d032c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 ms_handle_reset con 0x558bf2478400 session 0x558bf4399c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 44908544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f8148000/0x0/0x4ffc00000, data 0x2a17638/0x2b85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f8148000/0x0/0x4ffc00000, data 0x2a17638/0x2b85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.589548+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf24abc00 session 0x558bf43981e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 44908544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f8144000/0x0/0x4ffc00000, data 0x2a19260/0x2b88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.589726+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 44908544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.589928+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf5e3c800 session 0x558bf3e450e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 44908544 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.607349396s of 11.119590759s, submitted: 162
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf2d0cc00 session 0x558bf3052f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf45e8000 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2082275 data_alloc: 234881024 data_used: 11677696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.590048+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf2478400 session 0x558bf318e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.590198+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f8cd8000/0x0/0x4ffc00000, data 0x1e871fe/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.590322+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 heartbeat osd_stat(store_statfs(0x4f8cd8000/0x0/0x4ffc00000, data 0x1e871fe/0x1ff5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.590489+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 ms_handle_reset con 0x558bf3ffb800 session 0x558bf403e1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.590647+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 265 ms_handle_reset con 0x558bf5e3a800 session 0x558bf23425a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136871936 unmapped: 49905664 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904761 data_alloc: 218103808 data_used: 7974912
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.590892+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136871936 unmapped: 49905664 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffd000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.591049+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f8fd1000/0x0/0x4ffc00000, data 0x177cc56/0x18eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 266 ms_handle_reset con 0x558bf3ffd000 session 0x558bf42e43c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136880128 unmapped: 49897472 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.591254+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136880128 unmapped: 49897472 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 266 ms_handle_reset con 0x558bf3e19400 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.591665+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 267 ms_handle_reset con 0x558bf2478400 session 0x558bf403f4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f8fcf000/0x0/0x4ffc00000, data 0x177e82a/0x18ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.591857+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 268 ms_handle_reset con 0x558bf3e19400 session 0x558bf22b61e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917401 data_alloc: 218103808 data_used: 7987200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.592047+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.592249+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 268 heartbeat osd_stat(store_statfs(0x4f8fc7000/0x0/0x4ffc00000, data 0x17820a4/0x18f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.592464+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.592631+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.592827+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917401 data_alloc: 218103808 data_used: 7987200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.593015+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.593183+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 49963008 heap: 186777600 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a6c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.849105835s of 17.470384598s, submitted: 102
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf52a6c00 session 0x558bf21fbe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24aac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf24aac00 session 0x558bf403f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf3d07000 session 0x558bf403e780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.593316+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf19b5000 session 0x558bf403f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5ff7800 session 0x558bf22b7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 53346304 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf3e18400 session 0x558bf316cb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5e3a000 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf19b5000 session 0x558bf316c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f8fc4000/0x0/0x4ffc00000, data 0x1783b6e/0x18f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.593572+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 53346304 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.593841+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 53346304 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2011375 data_alloc: 218103808 data_used: 7987200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.594068+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f8414000/0x0/0x4ffc00000, data 0x2333b6e/0x24a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf3d07000 session 0x558bf19fa960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 53346304 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf3e18400 session 0x558bf19faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.594285+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5ff7800 session 0x558bf19fa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 53346304 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf3f92000 session 0x558bf40830e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.594472+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137650176 unmapped: 53329920 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.594698+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 137650176 unmapped: 53329920 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.594832+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 49856512 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098882 data_alloc: 234881024 data_used: 19955712
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.595199+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f8413000/0x0/0x4ffc00000, data 0x2333ba1/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 49856512 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.595364+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141123584 unmapped: 49856512 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf43d1000 session 0x558bf3e45c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.595535+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5ff6c00 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5f62000 session 0x558bf51212c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 50282496 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.595713+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.276758194s of 11.438013077s, submitted: 32
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf45e8c00 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 50282496 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.595940+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 50282496 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2100646 data_alloc: 234881024 data_used: 20090880
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.596234+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f8412000/0x0/0x4ffc00000, data 0x2333bb1/0x24ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2c7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf2c7cc00 session 0x558bf40830e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 50282496 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf43d1000 session 0x558bf19faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.596383+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf45e8c00 session 0x558bf316c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf5f62000 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 50003968 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.596574+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 heartbeat osd_stat(store_statfs(0x4f81de000/0x0/0x4ffc00000, data 0x2567bb1/0x26e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 50003968 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.596749+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 ms_handle_reset con 0x558bf19b4800 session 0x558bf2d030e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 49979392 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.596982+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 49446912 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2138642 data_alloc: 234881024 data_used: 20107264
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 270 ms_handle_reset con 0x558bf3ffc000 session 0x558bf3e443c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.597166+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 270 ms_handle_reset con 0x558bf5e58400 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 ms_handle_reset con 0x558bf24bf000 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142319616 unmapped: 48660480 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.597354+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 ms_handle_reset con 0x558bf19b4800 session 0x558bf4399c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146735104 unmapped: 44244992 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.597530+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 44711936 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 ms_handle_reset con 0x558bf5f62400 session 0x558bf3e40f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.597692+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 ms_handle_reset con 0x558bf3d07000 session 0x558bf42e41e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 heartbeat osd_stat(store_statfs(0x4f6f41000/0x0/0x4ffc00000, data 0x3801369/0x397d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.501066208s of 10.093942642s, submitted: 113
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 ms_handle_reset con 0x558bf19b4800 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146767872 unmapped: 44212224 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.597875+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24ab000 session 0x558bf30534a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf5e58400 session 0x558bf403e1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146767872 unmapped: 44212224 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278158 data_alloc: 234881024 data_used: 20168704
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24bc000 session 0x558bf2d032c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.598023+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146767872 unmapped: 44212224 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.598206+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146767872 unmapped: 44212224 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.598388+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f6f3b000/0x0/0x4ffc00000, data 0x3802fc2/0x3982000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146767872 unmapped: 44212224 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.598801+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146923520 unmapped: 44056576 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24bf000 session 0x558bf422f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.598930+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf19b4800 session 0x558bf2c06960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 63K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 4927 syncs, 3.26 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8252 writes, 32K keys, 8252 commit groups, 1.0 writes per commit group, ingest: 20.69 MB, 0.03 MB/s
                                           Interval WAL: 8253 writes, 3353 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 44048384 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2275918 data_alloc: 234881024 data_used: 20168704
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.599126+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 44048384 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.599253+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146931712 unmapped: 44048384 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24ab000 session 0x558bf2d4d860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.599416+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24bc000 session 0x558bf63b2d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 44040192 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f6f1a000/0x0/0x4ffc00000, data 0x3824f9f/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.599629+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 44040192 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.599848+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 146939904 unmapped: 44040192 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2275918 data_alloc: 234881024 data_used: 20168704
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.600001+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf5e58800 session 0x558bf52b05a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24bd000 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147316736 unmapped: 43663360 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.600175+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.512125969s of 12.745903015s, submitted: 24
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf19b4800 session 0x558bf3e40d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147316736 unmapped: 43663360 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f6f1a000/0x0/0x4ffc00000, data 0x3824fc2/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf24ab000 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.600336+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147316736 unmapped: 43663360 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 heartbeat osd_stat(store_statfs(0x4f6f1a000/0x0/0x4ffc00000, data 0x3824fc2/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.600490+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 39739392 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.600615+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3b400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf5e3b400 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 39739392 heap: 190980096 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2321667 data_alloc: 234881024 data_used: 26263552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.600767+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf5e3a800 session 0x558bf30e6d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf3ff9800 session 0x558bf2c07e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151691264 unmapped: 42958848 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 ms_handle_reset con 0x558bf19b4800 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.600907+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 42942464 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.601124+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 273 ms_handle_reset con 0x558bf24ab000 session 0x558bf4320780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 42942464 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.601300+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 42942464 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f6295000/0x0/0x4ffc00000, data 0x44a7b96/0x4628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.601487+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 42942464 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2424276 data_alloc: 234881024 data_used: 26275840
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.601663+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151715840 unmapped: 42934272 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 274 ms_handle_reset con 0x558bf43c1000 session 0x558bf3e434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.601816+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f6292000/0x0/0x4ffc00000, data 0x44a976a/0x462b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151101440 unmapped: 43548672 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429430008s of 10.751461029s, submitted: 45
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 ms_handle_reset con 0x558bf19b4c00 session 0x558bf2d4d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.601975+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 ms_handle_reset con 0x558bf3ffc000 session 0x558bf52b1680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 ms_handle_reset con 0x558bf3dbbc00 session 0x558bf21632c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f628e000/0x0/0x4ffc00000, data 0x44ab33e/0x462e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151109632 unmapped: 43540480 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 ms_handle_reset con 0x558bf19b4800 session 0x558bf2d4cd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.602117+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151109632 unmapped: 43540480 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 276 ms_handle_reset con 0x558bf19b4c00 session 0x558bf44ddc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.602280+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 276 ms_handle_reset con 0x558bf24ab000 session 0x558bf44dc3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151109632 unmapped: 43540480 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2434488 data_alloc: 234881024 data_used: 26275840
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 277 ms_handle_reset con 0x558bf43c1000 session 0x558bf44dcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.602441+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 277 ms_handle_reset con 0x558bf19b4800 session 0x558bf2d4d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 155975680 unmapped: 38674432 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f5b12000/0x0/0x4ffc00000, data 0x5209b9e/0x4daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.602568+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4c00 session 0x558bf4320780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf24ab000 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 38453248 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.602717+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf3e19400 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4000 session 0x558bf403e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156196864 unmapped: 38453248 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf5ea7800 session 0x558bf4083a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf5ea7000 session 0x558bf40832c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.602861+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f5a8c000/0x0/0x4ffc00000, data 0x528f7d6/0x4e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4800 session 0x558bf2163e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 43851776 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4c00 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.603056+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf24ab000 session 0x558bf30e72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 149954560 unmapped: 44695552 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306181 data_alloc: 234881024 data_used: 15491072
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.603211+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f71e0000/0x0/0x4ffc00000, data 0x3b3c7a3/0x36dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 149954560 unmapped: 44695552 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.603340+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f71e0000/0x0/0x4ffc00000, data 0x3b3c7a3/0x36dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 41164800 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.603800+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4800 session 0x558bf63b3a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.677801132s of 10.396897316s, submitted: 161
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf19b4c00 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 41164800 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.604058+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf2d45800 session 0x558bf52b12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c0800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 41803776 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.604262+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 41803776 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2386400 data_alloc: 251658240 data_used: 27369472
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.604397+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 ms_handle_reset con 0x558bf65c0800 session 0x558bf52b05a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 ms_handle_reset con 0x558bf45e8000 session 0x558bf52b0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 41803776 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.604529+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152846336 unmapped: 41803776 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.604690+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f7201000/0x0/0x4ffc00000, data 0x3b1a2df/0x36bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 41787392 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.604867+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 41787392 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.605144+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 41787392 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2394759 data_alloc: 251658240 data_used: 27377664
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.605366+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152870912 unmapped: 41779200 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.605509+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152870912 unmapped: 41779200 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.605708+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f7201000/0x0/0x4ffc00000, data 0x3b1a2df/0x36bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f7201000/0x0/0x4ffc00000, data 0x3b1a2df/0x36bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152903680 unmapped: 41746432 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.567022204s of 10.608901978s, submitted: 36
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.605860+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 ms_handle_reset con 0x558bf19b5400 session 0x558bf2d4d860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f7201000/0x0/0x4ffc00000, data 0x3b1a2df/0x36bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152928256 unmapped: 41721856 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.606022+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf193cc00 session 0x558bf45823c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf3fb7800 session 0x558bf2fb8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf24bc400 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151584768 unmapped: 43065344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2398652 data_alloc: 251658240 data_used: 27385856
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.606176+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf24bf800 session 0x558bf3e42b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf5f62400 session 0x558bf43e8b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf193cc00 session 0x558bf3e401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 ms_handle_reset con 0x558bf19b5400 session 0x558bf21fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 43048960 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.606349+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 ms_handle_reset con 0x558bf24bc400 session 0x558bf4578d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 ms_handle_reset con 0x558bf45e9000 session 0x558bf4536d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 ms_handle_reset con 0x558bf193cc00 session 0x558bf30e7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 151617536 unmapped: 43032576 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 ms_handle_reset con 0x558bf24bc400 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.606508+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 ms_handle_reset con 0x558bf5f62400 session 0x558bf4321a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf19b5400 session 0x558bf5e5c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf3f90000 session 0x558bf4536b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf3fb7800 session 0x558bf4579a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf19b5400 session 0x558bf316d680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf3f91c00 session 0x558bf19fb4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 ms_handle_reset con 0x558bf193cc00 session 0x558bf3e432c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143032320 unmapped: 51617792 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.606659+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf24bc400 session 0x558bf3e43a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f8f9a000/0x0/0x4ffc00000, data 0x179c3f6/0x1923000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143015936 unmapped: 51634176 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf193cc00 session 0x558bf43e8b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.606816+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf19b5400 session 0x558bf45823c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 51642368 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2010158 data_alloc: 218103808 data_used: 8175616
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf3f91c00 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.607061+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf3fb7800 session 0x558bf4320780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf3f90000 session 0x558bf44dcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141836288 unmapped: 52813824 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.607307+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 ms_handle_reset con 0x558bf193cc00 session 0x558bf44ddc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141844480 unmapped: 52805632 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.607510+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf19b5400 session 0x558bf21632c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141860864 unmapped: 52789248 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.607676+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.360013008s of 10.410017014s, submitted: 234
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf171d400 session 0x558bf42e52c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf3ff8c00 session 0x558bf42e5a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f8f95000/0x0/0x4ffc00000, data 0x179e0d6/0x1928000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141860864 unmapped: 52789248 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf3ffc400 session 0x558bf2342000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.607822+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf171d400 session 0x558bf2fb8d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf19b5400 session 0x558bf21fb0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf193cc00 session 0x558bf4083680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 53379072 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf3f93000 session 0x558bf19fa960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2094402 data_alloc: 218103808 data_used: 7675904
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 ms_handle_reset con 0x558bf3ff8c00 session 0x558bf4536960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.607996+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 141271040 unmapped: 53379072 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.608193+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 285 ms_handle_reset con 0x558bf171d400 session 0x558bf3e401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 52322304 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.608348+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf193cc00 session 0x558bf52a1c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf43bf000 session 0x558bf19fad20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 52387840 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf24ab800 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.608508+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 heartbeat osd_stat(store_statfs(0x4f8735000/0x0/0x4ffc00000, data 0x1fff6da/0x2188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 52387840 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.608710+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf8a4a800 session 0x558bf4537a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf193cc00 session 0x558bf2162960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf171d400 session 0x558bf43e9680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 52363264 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098808 data_alloc: 218103808 data_used: 7671808
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.608907+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 52363264 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.609153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 ms_handle_reset con 0x558bf43bf000 session 0x558bf30523c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142303232 unmapped: 52346880 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.609324+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f8735000/0x0/0x4ffc00000, data 0x1fff6ea/0x2189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 287 ms_handle_reset con 0x558bf24bec00 session 0x558bf45365a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 52322304 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.609501+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.536039352s of 10.025963783s, submitted: 122
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 288 ms_handle_reset con 0x558bf8a4bc00 session 0x558bf63b2780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 288 ms_handle_reset con 0x558bf24ab800 session 0x558bf403ef00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 52314112 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.609718+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 52314112 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2108793 data_alloc: 218103808 data_used: 7692288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.609875+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 288 ms_handle_reset con 0x558bf193cc00 session 0x558bf42e52c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf171d400 session 0x558bf21fb0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf24bec00 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143400960 unmapped: 51249152 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.610077+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf43bf000 session 0x558bf42e5a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf171d400 session 0x558bf3053a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf193cc00 session 0x558bf2fb8d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf24ab800 session 0x558bf2342d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf24bec00 session 0x558bf19fa780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 51200000 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf3f92c00 session 0x558bf2163e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.610224+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf193cc00 session 0x558bf44dd860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf24ab800 session 0x558bf44dc780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 51191808 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf24bec00 session 0x558bf43e8780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.610394+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf5ff6800 session 0x558bf2343c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f63c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf3ffac00 session 0x558bf422f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f8729000/0x0/0x4ffc00000, data 0x2004bac/0x2195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147480576 unmapped: 47169536 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.610815+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 ms_handle_reset con 0x558bf193cc00 session 0x558bf4536f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 290 ms_handle_reset con 0x558bf24ab800 session 0x558bf4536000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 290 ms_handle_reset con 0x558bf24bec00 session 0x558bf30e6960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147488768 unmapped: 47161344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2184931 data_alloc: 234881024 data_used: 16097280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.611588+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 291 ms_handle_reset con 0x558bf5ff6800 session 0x558bf2d4c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 291 ms_handle_reset con 0x558bf5f63c00 session 0x558bf63b2b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 291 ms_handle_reset con 0x558bf193cc00 session 0x558bf2d01c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 291 ms_handle_reset con 0x558bf24ab800 session 0x558bf42a6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147488768 unmapped: 47161344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.611952+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf24bec00 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf5ff6800 session 0x558bf42e43c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147488768 unmapped: 47161344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f871f000/0x0/0x4ffc00000, data 0x200846c/0x219d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.612247+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf3d08800 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf193cc00 session 0x558bf3e454a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147496960 unmapped: 47153152 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.612497+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf24ab800 session 0x558bf45785a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.499817848s of 10.000440598s, submitted: 146
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf24bec00 session 0x558bf318f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147513344 unmapped: 47136768 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.612999+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 ms_handle_reset con 0x558bf3f93000 session 0x558bf422fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147513344 unmapped: 47136768 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2194663 data_alloc: 234881024 data_used: 16101376
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.613200+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 293 ms_handle_reset con 0x558bf5ff6800 session 0x558bf422ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147488768 unmapped: 47161344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.613396+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 ms_handle_reset con 0x558bf2a42c00 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 ms_handle_reset con 0x558bf193cc00 session 0x558bf422fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 ms_handle_reset con 0x558bf3d08800 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147496960 unmapped: 47153152 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.613544+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f8714000/0x0/0x4ffc00000, data 0x200ddc7/0x21a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147513344 unmapped: 47136768 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2c7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.613709+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 ms_handle_reset con 0x558bf62b7000 session 0x558bf318f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 47128576 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 295 ms_handle_reset con 0x558bf24be400 session 0x558bf3e454a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.613897+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 295 ms_handle_reset con 0x558bf24bc400 session 0x558bf4536000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153812992 unmapped: 40837120 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2300903 data_alloc: 234881024 data_used: 16134144
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 296 ms_handle_reset con 0x558bf1947c00 session 0x558bf3e42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.614098+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 296 ms_handle_reset con 0x558bf2c7cc00 session 0x558bf21fa960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f8711000/0x0/0x4ffc00000, data 0x200f9c7/0x21ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf193cc00 session 0x558bf45781e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf24be400 session 0x558bf52b7680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf193cc00 session 0x558bf2c070e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153518080 unmapped: 41132032 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf1947c00 session 0x558bf2c07a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.614220+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf24bc400 session 0x558bf52b0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2c7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf2c7cc00 session 0x558bf2d02d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf62b7000 session 0x558bf19fc5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 ms_handle_reset con 0x558bf2a42c00 session 0x558bf531a5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152780800 unmapped: 41869312 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.614345+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 298 ms_handle_reset con 0x558bf5ff7c00 session 0x558bf4399860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f6942000/0x0/0x4ffc00000, data 0x2c30c45/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 298 ms_handle_reset con 0x558bf3d06000 session 0x558bf42e50e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 298 heartbeat osd_stat(store_statfs(0x4f6942000/0x0/0x4ffc00000, data 0x2c30c45/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 41861120 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3e19400 session 0x558bf2fcfe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3dba800 session 0x558bf2163860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf24bf400 session 0x558bf316dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.614470+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3d08800 session 0x558bf2342d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 41861120 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.614613+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf2a42c00 session 0x558bf2d6c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.266131401s of 10.939495087s, submitted: 218
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3d06000 session 0x558bf3e42f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f693d000/0x0/0x4ffc00000, data 0x2c32d0a/0x2dd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 41852928 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2347331 data_alloc: 234881024 data_used: 18055168
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.614781+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3e19400 session 0x558bf63b2f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 41852928 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.614917+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf24bf400 session 0x558bf63b30e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f693d000/0x0/0x4ffc00000, data 0x2c32d0a/0x2dd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3d06000 session 0x558bf42e5a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf3d08800 session 0x558bf42e52c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 ms_handle_reset con 0x558bf5ff7c00 session 0x558bf45365a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153042944 unmapped: 41607168 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.615130+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 300 ms_handle_reset con 0x558bf3e19000 session 0x558bf44e0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 300 ms_handle_reset con 0x558bf24bf400 session 0x558bf3e40b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153067520 unmapped: 41582592 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.615283+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 301 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf40823c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 301 ms_handle_reset con 0x558bf3d08800 session 0x558bf42a6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 301 ms_handle_reset con 0x558bf3d06000 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 301 ms_handle_reset con 0x558bf2a42c00 session 0x558bf45374a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 41566208 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.615437+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 41541632 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2344803 data_alloc: 234881024 data_used: 18063360
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.615597+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 302 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf43e9680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 302 ms_handle_reset con 0x558bf24bf400 session 0x558bf45363c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 41541632 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.615794+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 41541632 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.615996+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 303 ms_handle_reset con 0x558bf3d06000 session 0x558bf422ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c3000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 303 ms_handle_reset con 0x558bf43c3000 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f691e000/0x0/0x4ffc00000, data 0x2c5a0ca/0x2dff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 303 ms_handle_reset con 0x558bf62b7800 session 0x558bf2d4c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 41525248 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.616187+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 304 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf30e63c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 304 ms_handle_reset con 0x558bf24bf400 session 0x558bf43983c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f6918000/0x0/0x4ffc00000, data 0x2c5d439/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 41459712 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.616357+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.378499031s of 10.091331482s, submitted: 236
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153239552 unmapped: 41410560 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2354368 data_alloc: 234881024 data_used: 18071552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.616550+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c3000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 305 ms_handle_reset con 0x558bf3d06000 session 0x558bf2fb94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 305 ms_handle_reset con 0x558bf2ae8000 session 0x558bf4321680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 306 ms_handle_reset con 0x558bf43c3000 session 0x558bf21fad20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153280512 unmapped: 41369600 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.616867+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 306 ms_handle_reset con 0x558bf24bf400 session 0x558bf63b2f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f6502000/0x0/0x4ffc00000, data 0x2c60736/0x2e09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153288704 unmapped: 41361408 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.617049+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 307 ms_handle_reset con 0x558bf2ae8000 session 0x558bf2163860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 307 ms_handle_reset con 0x558bf65c0000 session 0x558bf19fc5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153329664 unmapped: 41320448 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 307 ms_handle_reset con 0x558bf3dbb000 session 0x558bf2c07a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.617219+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153337856 unmapped: 41312256 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.617424+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 308 ms_handle_reset con 0x558bf5e7bc00 session 0x558bf21fa960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 309 ms_handle_reset con 0x558bf24bf400 session 0x558bf52b0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 41304064 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2366186 data_alloc: 234881024 data_used: 18092032
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.617797+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 310 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf3e44f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 310 ms_handle_reset con 0x558bf2ae8000 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 40239104 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.618226+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 40206336 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.618344+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f64f5000/0x0/0x4ffc00000, data 0x2c6c226/0x2e16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 312 ms_handle_reset con 0x558bf3dbb000 session 0x558bf3e443c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 312 ms_handle_reset con 0x558bf65c0000 session 0x558bf43985a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 312 ms_handle_reset con 0x558bf5e3a800 session 0x558bf21fb680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154501120 unmapped: 40148992 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.618504+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f64f3000/0x0/0x4ffc00000, data 0x2c6deb0/0x2e1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 313 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf51212c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154525696 unmapped: 40124416 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.618642+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 ms_handle_reset con 0x558bf24bf400 session 0x558bf3e410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f64f1000/0x0/0x4ffc00000, data 0x2c6fa2e/0x2e1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.198980331s of 10.011181831s, submitted: 245
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 ms_handle_reset con 0x558bf3dbb000 session 0x558bf3e41e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 ms_handle_reset con 0x558bf2ae8000 session 0x558bf2c07680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154550272 unmapped: 40099840 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2384117 data_alloc: 234881024 data_used: 18079744
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.618802+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f64ec000/0x0/0x4ffc00000, data 0x2c716b8/0x2e1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.619052+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 40067072 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 315 ms_handle_reset con 0x558bf24bf400 session 0x558bf3e41c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.619275+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 40067072 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f64e6000/0x0/0x4ffc00000, data 0x2e3018e/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 316 ms_handle_reset con 0x558bf24bc800 session 0x558bf4399a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.619445+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154599424 unmapped: 40050688 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 316 handle_osd_map epochs [317,317], i have 317, src has [1,317]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 ms_handle_reset con 0x558bf5ee6800 session 0x558bf2c070e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 ms_handle_reset con 0x558bf3e18800 session 0x558bf2d02d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf19fb860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.619977+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154615808 unmapped: 40034304 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f64de000/0x0/0x4ffc00000, data 0x2e33ec4/0x2e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.620250+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bc800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154615808 unmapped: 40034304 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2410570 data_alloc: 234881024 data_used: 18096128
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 ms_handle_reset con 0x558bf24bc800 session 0x558bf318e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 318 ms_handle_reset con 0x558bf24bf400 session 0x558bf44dc3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f64dc000/0x0/0x4ffc00000, data 0x2e33f36/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 ms_handle_reset con 0x558bf3e18800 session 0x558bf43e94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2163a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 ms_handle_reset con 0x558bf65c1000 session 0x558bf42e43c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 ms_handle_reset con 0x558bf5e3c400 session 0x558bf44e1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f64dc000/0x0/0x4ffc00000, data 0x2e33f36/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.620424+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154656768 unmapped: 39993344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 320 ms_handle_reset con 0x558bf5ee6800 session 0x558bf63b30e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.620720+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154779648 unmapped: 39870464 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 320 ms_handle_reset con 0x558bf5f92000 session 0x558bf4579e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.620876+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf422e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 39854080 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 heartbeat osd_stat(store_statfs(0x4f64c5000/0x0/0x4ffc00000, data 0x314b384/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.621013+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 39854080 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf5e3d000 session 0x558bf316c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.740106583s of 10.109332085s, submitted: 132
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf5e3c400 session 0x558bf44e12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf5f92000 session 0x558bf2d02d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 ms_handle_reset con 0x558bf24be000 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.621214+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 40697856 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2470085 data_alloc: 234881024 data_used: 18120704
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 322 ms_handle_reset con 0x558bf3d06000 session 0x558bf403ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 322 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf30e72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 322 ms_handle_reset con 0x558bf24be000 session 0x558bf2d4c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.621666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 40665088 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 323 ms_handle_reset con 0x558bf3dbb000 session 0x558bf4399a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 323 ms_handle_reset con 0x558bf5e3c400 session 0x558bf43985a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 323 ms_handle_reset con 0x558bf5ee6800 session 0x558bf19fa780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.622153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 153985024 unmapped: 40665088 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 324 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2b57860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.622313+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154042368 unmapped: 40607744 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f64bd000/0x0/0x4ffc00000, data 0x31508b9/0x2e4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 325 ms_handle_reset con 0x558bf2d0c400 session 0x558bf2c07c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e19800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 325 ms_handle_reset con 0x558bf3e19800 session 0x558bf40832c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d08400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.622474+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154107904 unmapped: 40542208 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.622643+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154107904 unmapped: 40542208 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2482436 data_alloc: 234881024 data_used: 18362368
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f64bc000/0x0/0x4ffc00000, data 0x3151fa6/0x2e4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 326 ms_handle_reset con 0x558bf3d08400 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.622783+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 327 ms_handle_reset con 0x558bf5ee7c00 session 0x558bf52b61e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 327 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf44e14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.622966+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.623120+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.623280+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f64b7000/0x0/0x4ffc00000, data 0x3155724/0x2e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.623422+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2488208 data_alloc: 234881024 data_used: 18362368
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.623571+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 40534016 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.623887+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 154124288 unmapped: 40525824 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.169567108s of 12.705246925s, submitted: 154
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 328 ms_handle_reset con 0x558bf5ee7400 session 0x558bf43e8b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 329 ms_handle_reset con 0x558bf3dbb800 session 0x558bf42e4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.624042+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 329 ms_handle_reset con 0x558bf45e8c00 session 0x558bf318e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 38379520 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.624263+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 38379520 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.624413+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 38379520 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2514748 data_alloc: 234881024 data_used: 18362368
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f6496000/0x0/0x4ffc00000, data 0x319ffd6/0x2e76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.624544+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 38330368 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 329 handle_osd_map epochs [330,330], i have 330, src has [1,330]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.624745+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 37101568 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 ms_handle_reset con 0x558bf45e9c00 session 0x558bf4398000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 ms_handle_reset con 0x558bf3f91000 session 0x558bf3e40000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.624858+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157523968 unmapped: 37126144 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2d4c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.625020+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 37060608 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f6449000/0x0/0x4ffc00000, data 0x31ea746/0x2ec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 ms_handle_reset con 0x558bf45e8c00 session 0x558bf3e403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 ms_handle_reset con 0x558bf5ee7400 session 0x558bf4040f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 332 ms_handle_reset con 0x558bf62b7c00 session 0x558bf40412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.625141+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157663232 unmapped: 36986880 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2553021 data_alloc: 234881024 data_used: 22970368
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 333 ms_handle_reset con 0x558bf3e18800 session 0x558bf403f4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 333 ms_handle_reset con 0x558bf3dbb800 session 0x558bf42e5680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.625277+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157720576 unmapped: 36929536 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 333 ms_handle_reset con 0x558bf3f91000 session 0x558bf2fb8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.625399+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 157728768 unmapped: 36921344 heap: 194650112 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 334 ms_handle_reset con 0x558bf5ff6400 session 0x558bf63b3680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.349979877s of 10.085923195s, submitted: 102
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 334 ms_handle_reset con 0x558bf43c1400 session 0x558bf2fb9a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 334 heartbeat osd_stat(store_statfs(0x4f643c000/0x0/0x4ffc00000, data 0x31efc3c/0x2ed0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.625515+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195567616 unmapped: 36880384 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 335 ms_handle_reset con 0x558bf311fc00 session 0x558bf52a1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f503c000/0x0/0x4ffc00000, data 0x45efc3c/0x42d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,6])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.625727+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167116800 unmapped: 65331200 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 335 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf22b6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f3bcc000/0x0/0x4ffc00000, data 0x5a5f82c/0x5741000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,1,0,0,1,2,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.625855+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 158769152 unmapped: 73678848 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3107969 data_alloc: 234881024 data_used: 22986752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.626009+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 64184320 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 ms_handle_reset con 0x558bf3dbb800 session 0x558bf21623c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.626182+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 68313088 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 ms_handle_reset con 0x558bf3e18800 session 0x558bf63b2000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.626304+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 168468480 unmapped: 63979520 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 ms_handle_reset con 0x558bf45e9800 session 0x558bf3e41860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 heartbeat osd_stat(store_statfs(0x4ea7c9000/0x0/0x4ffc00000, data 0xee6148c/0xeb44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.626468+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 168673280 unmapped: 63774720 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.836310+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172933120 unmapped: 59514880 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462303 data_alloc: 234881024 data_used: 22999040
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 ms_handle_reset con 0x558bf311fc00 session 0x558bf2b565a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 ms_handle_reset con 0x558bf3dbb800 session 0x558bf4320000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 heartbeat osd_stat(store_statfs(0x4e5bca000/0x0/0x4ffc00000, data 0x13a6148c/0x13744000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.836432+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 165707776 unmapped: 66740224 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf3e18800 session 0x558bf44dda40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf5d40c00 session 0x558bf44e1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf3d09400 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf3e44780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.836548+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.874756336s of 10.028690338s, submitted: 348
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162897920 unmapped: 69550080 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.836663+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 64962560 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf43c1400 session 0x558bf2342780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 ms_handle_reset con 0x558bf5ee8000 session 0x558bf40832c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 338 ms_handle_reset con 0x558bf311fc00 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 338 ms_handle_reset con 0x558bf5f92000 session 0x558bf52a0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.837456+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163332096 unmapped: 69115904 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 338 heartbeat osd_stat(store_statfs(0x4df3c4000/0x0/0x4ffc00000, data 0x1a264c52/0x19f49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.837626+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5163184 data_alloc: 234881024 data_used: 23019520
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163348480 unmapped: 69099520 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 338 handle_osd_map epochs [339,339], i have 339, src has [1,339]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 340 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2d01860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 340 ms_handle_reset con 0x558bf311fc00 session 0x558bf422f860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.837796+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163446784 unmapped: 69001216 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.837981+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163463168 unmapped: 68984832 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 340 ms_handle_reset con 0x558bf2d0c800 session 0x558bf316cd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.838177+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 341 ms_handle_reset con 0x558bf43d1c00 session 0x558bf44dd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163463168 unmapped: 68984832 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 341 ms_handle_reset con 0x558bf5e3dc00 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.838314+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163471360 unmapped: 68976640 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 341 ms_handle_reset con 0x558bf311fc00 session 0x558bf52a0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 341 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf63b30e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.838441+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5178269 data_alloc: 234881024 data_used: 23044096
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 ms_handle_reset con 0x558bf2a43c00 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163553280 unmapped: 68894720 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 ms_handle_reset con 0x558bf43d1c00 session 0x558bf52b63c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 ms_handle_reset con 0x558bf5e58400 session 0x558bf23425a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 heartbeat osd_stat(store_statfs(0x4df3bb000/0x0/0x4ffc00000, data 0x1a26af8a/0x19f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 ms_handle_reset con 0x558bf2a43c00 session 0x558bf52b10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 343 ms_handle_reset con 0x558bf623dc00 session 0x558bf2d4cd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 343 ms_handle_reset con 0x558bf2d0c800 session 0x558bf52a1c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 343 ms_handle_reset con 0x558bf311fc00 session 0x558bf52b1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.838567+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163602432 unmapped: 68845568 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 343 heartbeat osd_stat(store_statfs(0x4df46c000/0x0/0x4ffc00000, data 0x1a1b8546/0x19e9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 344 ms_handle_reset con 0x558bf5ff7400 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 344 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.838755+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.289133072s of 10.003424644s, submitted: 279
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163700736 unmapped: 68747264 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 344 ms_handle_reset con 0x558bf2d0c800 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 345 ms_handle_reset con 0x558bf2a43c00 session 0x558bf316cd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.838938+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 345 ms_handle_reset con 0x558bf623dc00 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 68657152 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 346 ms_handle_reset con 0x558bf215d000 session 0x558bf403f4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.839139+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 346 ms_handle_reset con 0x558bf311fc00 session 0x558bf316c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163807232 unmapped: 68640768 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 346 ms_handle_reset con 0x558bf171d400 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 346 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf44dc780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.839266+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5151597 data_alloc: 234881024 data_used: 22917120
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163823616 unmapped: 68624384 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.839707+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 163815424 unmapped: 68632576 heap: 232448000 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 ms_handle_reset con 0x558bf2d0c800 session 0x558bf23fc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.839842+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 218824704 unmapped: 38821888 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 ms_handle_reset con 0x558bf193cc00 session 0x558bf2b565a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 ms_handle_reset con 0x558bf623dc00 session 0x558bf4579e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 heartbeat osd_stat(store_statfs(0x4dcc82000/0x0/0x4ffc00000, data 0x1c4b1520/0x1c68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.840028+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161546240 unmapped: 96100352 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.840218+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161710080 unmapped: 95936512 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 ms_handle_reset con 0x558bf171d400 session 0x558bf2d01860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.840397+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5933760 data_alloc: 218103808 data_used: 7696384
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166215680 unmapped: 91430912 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 348 ms_handle_reset con 0x558bf311fc00 session 0x558bf2d4c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.840569+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 349 ms_handle_reset con 0x558bf2d0c800 session 0x558bf52a0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 99532800 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 349 ms_handle_reset con 0x558bf3ffa800 session 0x558bf43203c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 349 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf3e42960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.840736+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166707200 unmapped: 90939392 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.557832241s of 10.140983582s, submitted: 220
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 349 heartbeat osd_stat(store_statfs(0x4d3111000/0x0/0x4ffc00000, data 0x25c0ff70/0x25ded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,1,1,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.840911+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167059456 unmapped: 90587136 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.841115+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 175726592 unmapped: 81920000 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.841435+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7123688 data_alloc: 218103808 data_used: 7716864
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 171999232 unmapped: 85647360 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 350 ms_handle_reset con 0x558bf3dbb800 session 0x558bf44dd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 350 ms_handle_reset con 0x558bf8a4ac00 session 0x558bf403f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 351 ms_handle_reset con 0x558bf3f91800 session 0x558bf4040f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 351 ms_handle_reset con 0x558bf2a43c00 session 0x558bf63b3a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 351 ms_handle_reset con 0x558bf5d40400 session 0x558bf52b0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.841583+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160251904 unmapped: 97394688 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 ms_handle_reset con 0x558bf3dbb800 session 0x558bf2163860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf531a5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.841915+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 heartbeat osd_stat(store_statfs(0x4ca10b000/0x0/0x4ffc00000, data 0x2ec135c6/0x2edf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 ms_handle_reset con 0x558bf3f91800 session 0x558bf4398b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 ms_handle_reset con 0x558bf8a4ac00 session 0x558bf44e1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160276480 unmapped: 97370112 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.842165+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160276480 unmapped: 97370112 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 heartbeat osd_stat(store_statfs(0x4ca10a000/0x0/0x4ffc00000, data 0x2ec15394/0x2edf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.842321+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 97353728 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d41400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 ms_handle_reset con 0x558bf3ffa000 session 0x558bf44e0b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 ms_handle_reset con 0x558bf5d41400 session 0x558bf422ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 ms_handle_reset con 0x558bf43c1400 session 0x558bf43983c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 ms_handle_reset con 0x558bf5ee6400 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 ms_handle_reset con 0x558bf3d07400 session 0x558bf422e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.842575+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7454572 data_alloc: 218103808 data_used: 7987200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160522240 unmapped: 97124352 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.842740+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160538624 unmapped: 97107968 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.842976+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 160538624 unmapped: 97107968 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 354 ms_handle_reset con 0x558bf3ff8800 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 354 ms_handle_reset con 0x558bf2d0a800 session 0x558bf19fa3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.843194+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161030144 unmapped: 96616448 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 354 heartbeat osd_stat(store_statfs(0x4c999b000/0x0/0x4ffc00000, data 0x2f381a68/0x2f562000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.843383+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 96600064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.011055946s of 12.129410744s, submitted: 178
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 354 ms_handle_reset con 0x558bf1a28800 session 0x558bf42e4d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.843514+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7457888 data_alloc: 218103808 data_used: 7987200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 96600064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.843666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 96600064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d0c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf43d0c00 session 0x558bf21fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.843875+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 161103872 unmapped: 96542720 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf5ee9000 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 heartbeat osd_stat(store_statfs(0x4c999b000/0x0/0x4ffc00000, data 0x2f381aca/0x2f563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf1a28800 session 0x558bf4398b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf2d0a800 session 0x558bf4040f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf3ff8800 session 0x558bf403f2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.844038+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162979840 unmapped: 94666752 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf5ee8000 session 0x558bf44dd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.846728+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d0c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf43d0c00 session 0x558bf3e42960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf1a28800 session 0x558bf43203c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 94650368 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 heartbeat osd_stat(store_statfs(0x4c95ee000/0x0/0x4ffc00000, data 0x2f72d55a/0x2f90f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 ms_handle_reset con 0x558bf3ff8800 session 0x558bf44dc780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.846890+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7504937 data_alloc: 218103808 data_used: 7999488
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 94724096 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 356 ms_handle_reset con 0x558bf5ee9c00 session 0x558bf40821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 356 heartbeat osd_stat(store_statfs(0x4c8e31000/0x0/0x4ffc00000, data 0x2fee61c3/0x300cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.847051+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 93536256 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5d40800 session 0x558bf4537e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf2d0a800 session 0x558bf23fc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.847205+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 93528064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.847415+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8e2d000/0x0/0x4ffc00000, data 0x2fee7d97/0x300cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 93528064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8e2d000/0x0/0x4ffc00000, data 0x2fee7d97/0x300cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.847621+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 93528064 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.847798+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7628334 data_alloc: 234881024 data_used: 15474688
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 93519872 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.847975+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164126720 unmapped: 93519872 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8e2d000/0x0/0x4ffc00000, data 0x2fee7d97/0x300cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.848145+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 93511680 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.848397+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 93511680 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8e2d000/0x0/0x4ffc00000, data 0x2fee7d97/0x300cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.848601+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 93511680 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.848788+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7628334 data_alloc: 234881024 data_used: 15474688
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164134912 unmapped: 93511680 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.573204994s of 16.148366928s, submitted: 94
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3dbb400 session 0x558bf2fceb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.848926+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164446208 unmapped: 93200384 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.849180+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 164446208 unmapped: 93200384 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8ac3000/0x0/0x4ffc00000, data 0x30253d97/0x3043b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5ee6c00 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.849318+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 91766784 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5e7a000 session 0x558bf2343680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.849484+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3ff8400 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166576128 unmapped: 91070464 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3ffac00 session 0x558bf43e9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.849684+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7730102 data_alloc: 234881024 data_used: 15523840
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166576128 unmapped: 91070464 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.849888+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5e7a000 session 0x558bf63b21e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166469632 unmapped: 91176960 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.850195+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8175000/0x0/0x4ffc00000, data 0x30ba0df9/0x30d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166510592 unmapped: 91136000 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5e3c400 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf1946800 session 0x558bf44e1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.850370+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8175000/0x0/0x4ffc00000, data 0x30ba0df9/0x30d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166526976 unmapped: 91119616 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3f90000 session 0x558bf30e72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.850612+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166526976 unmapped: 91119616 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8176000/0x0/0x4ffc00000, data 0x30ba0d97/0x30d88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.850748+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5d40400 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7743528 data_alloc: 234881024 data_used: 17780736
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf1946800 session 0x558bf3e410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166658048 unmapped: 90988544 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.850903+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166658048 unmapped: 90988544 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765805244s of 11.279858589s, submitted: 111
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3f90000 session 0x558bf52a12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.851159+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166666240 unmapped: 90980352 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8155000/0x0/0x4ffc00000, data 0x30bbfe09/0x30da9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.851358+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf43e94a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf2d0ac00 session 0x558bf4046f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166666240 unmapped: 90980352 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8155000/0x0/0x4ffc00000, data 0x30bbfd97/0x30da7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.851574+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166666240 unmapped: 90980352 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c8155000/0x0/0x4ffc00000, data 0x30bbfd97/0x30da7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf65c1c00 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.851740+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7745840 data_alloc: 234881024 data_used: 17780736
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 166674432 unmapped: 90972160 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf1946800 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf2d0ac00 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.851985+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 89735168 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c814c000/0x0/0x4ffc00000, data 0x30bc9d97/0x30db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.852191+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 89735168 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.852373+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 89735168 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.852577+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 167911424 unmapped: 89735168 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.852750+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c814c000/0x0/0x4ffc00000, data 0x30bc9d97/0x30db1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7787595 data_alloc: 234881024 data_used: 17829888
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 170663936 unmapped: 86982656 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.852889+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 173801472 unmapped: 83845120 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.603356361s of 10.111203194s, submitted: 123
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.853046+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174063616 unmapped: 83582976 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.853202+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174063616 unmapped: 83582976 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.853408+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 83558400 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.853615+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7880385 data_alloc: 234881024 data_used: 19406848
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 83558400 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c7762000/0x0/0x4ffc00000, data 0x3195dd97/0x3179c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.853807+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5ee8000 session 0x558bf19fa5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf2a43800 session 0x558bf40410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174104576 unmapped: 83542016 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf311e000 session 0x558bf19fa3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.853987+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 83517440 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.854183+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 83517440 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.854333+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 83517440 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c772e000/0x0/0x4ffc00000, data 0x31991d64/0x317ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.854484+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7877508 data_alloc: 234881024 data_used: 19410944
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 83517440 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.854621+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c772e000/0x0/0x4ffc00000, data 0x31991d64/0x317ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174129152 unmapped: 83517440 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.854783+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.855036+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c772e000/0x0/0x4ffc00000, data 0x31991d64/0x317ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.028919220s of 11.498594284s, submitted: 68
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.855324+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c7725000/0x0/0x4ffc00000, data 0x3199cd64/0x317d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.856264+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7876720 data_alloc: 234881024 data_used: 19410944
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f63c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5f63c00 session 0x558bf2d012c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.856555+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf1a28800 session 0x558bf2d001e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.856882+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5ee9400 session 0x558bf44e1860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5ee7400 session 0x558bf2d00960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.857032+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c7724000/0x0/0x4ffc00000, data 0x3199cd74/0x317da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 83509248 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.857179+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174161920 unmapped: 83484672 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c7724000/0x0/0x4ffc00000, data 0x3199cd74/0x317da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.857338+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7880302 data_alloc: 234881024 data_used: 19529728
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 83476480 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.863797+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3dbb400 session 0x558bf43e8960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf3ff8400 session 0x558bf23fd2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 83476480 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.863942+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf1a28800 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172646400 unmapped: 85000192 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.864426+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172646400 unmapped: 85000192 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.864834+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172646400 unmapped: 85000192 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.139566422s of 11.206670761s, submitted: 20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c7a8d000/0x0/0x4ffc00000, data 0x31633d74/0x31471000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 ms_handle_reset con 0x558bf5f62000 session 0x558bf4579c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.865147+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7880942 data_alloc: 234881024 data_used: 17231872
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172736512 unmapped: 84910080 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.865786+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172736512 unmapped: 84910080 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.867485+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 172924928 unmapped: 84721664 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 358 ms_handle_reset con 0x558bf3f90800 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 358 ms_handle_reset con 0x558bf3f92400 session 0x558bf4578f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.868124+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 359 ms_handle_reset con 0x558bf24be400 session 0x558bf43e9860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 359 ms_handle_reset con 0x558bf62b7800 session 0x558bf2fb9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177700864 unmapped: 79945728 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 359 ms_handle_reset con 0x558bf1a28800 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.868405+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 359 heartbeat osd_stat(store_statfs(0x4c6556000/0x0/0x4ffc00000, data 0x3331c57e/0x329a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 360 ms_handle_reset con 0x558bf3ff8400 session 0x558bf30e6b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 360 ms_handle_reset con 0x558bf3f90800 session 0x558bf23fcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 79937536 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 360 ms_handle_reset con 0x558bf1a28800 session 0x558bf63b21e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.868578+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8083171 data_alloc: 234881024 data_used: 17252352
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178069504 unmapped: 79577088 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.868727+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180191232 unmapped: 77455360 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 361 ms_handle_reset con 0x558bf62b7800 session 0x558bf43e9a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.868991+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179568640 unmapped: 78077952 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 361 ms_handle_reset con 0x558bf43c2400 session 0x558bf44e1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.869338+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 78069760 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.869701+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 362 ms_handle_reset con 0x558bf62b7400 session 0x558bf2c07a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 78069760 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.869937+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 362 heartbeat osd_stat(store_statfs(0x4c78c3000/0x0/0x4ffc00000, data 0x314459de/0x3128c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7922308 data_alloc: 234881024 data_used: 23388160
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 78069760 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.349198341s of 11.106988907s, submitted: 230
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 362 ms_handle_reset con 0x558bf3f93400 session 0x558bf4536960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 362 ms_handle_reset con 0x558bf8a4a800 session 0x558bf2c072c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.870173+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 363 heartbeat osd_stat(store_statfs(0x4c78c3000/0x0/0x4ffc00000, data 0x314459de/0x3128c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179625984 unmapped: 78020608 heap: 257646592 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.870328+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 189292544 unmapped: 76759040 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.870608+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180944896 unmapped: 85106688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.870787+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 363 heartbeat osd_stat(store_statfs(0x4c306d000/0x0/0x4ffc00000, data 0x36047526/0x35e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 181051392 unmapped: 85000192 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.870968+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 363 heartbeat osd_stat(store_statfs(0x4c206d000/0x0/0x4ffc00000, data 0x37047526/0x36e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,1,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8787364 data_alloc: 234881024 data_used: 23396352
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 84721664 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.871123+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 75866112 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.871314+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 83976192 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 heartbeat osd_stat(store_statfs(0x4bd069000/0x0/0x4ffc00000, data 0x3c048fe0/0x3be94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.871493+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 83722240 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.871730+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182632448 unmapped: 83419136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.871993+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9828562 data_alloc: 234881024 data_used: 23404544
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.150237083s of 10.045161247s, submitted: 111
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 79036416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.872200+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 189440000 unmapped: 76611584 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.872430+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 heartbeat osd_stat(store_statfs(0x4b4077000/0x0/0x4ffc00000, data 0x4503bfe0/0x44e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf1a28800 session 0x558bf21632c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185663488 unmapped: 80388096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf3f93400 session 0x558bf44dda40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf52a7000 session 0x558bf44e1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24aa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.872627+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf24aa000 session 0x558bf4578960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185729024 unmapped: 80322560 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.872795+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 heartbeat osd_stat(store_statfs(0x4b5474000/0x0/0x4ffc00000, data 0x43040f6e/0x42e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,1,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185802752 unmapped: 80248832 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.872948+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9864123 data_alloc: 234881024 data_used: 23838720
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 80207872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf1a28800 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.873127+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 heartbeat osd_stat(store_statfs(0x4b686f000/0x0/0x4ffc00000, data 0x42444f6e/0x4228e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185966592 unmapped: 80084992 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.874581+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 ms_handle_reset con 0x558bf3f93400 session 0x558bf2d02d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185966592 unmapped: 80084992 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.874717+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185966592 unmapped: 80084992 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.874943+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 364 handle_osd_map epochs [365,365], i have 365, src has [1,365]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 365 ms_handle_reset con 0x558bf52a7000 session 0x558bf21fb0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 80076800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.875140+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 365 heartbeat osd_stat(store_statfs(0x4c7466000/0x0/0x4ffc00000, data 0x31c4cb96/0x31a97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 365 ms_handle_reset con 0x558bf3d09400 session 0x558bf422fc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a6c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8092335 data_alloc: 234881024 data_used: 23859200
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.691864014s of 10.057542801s, submitted: 182
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 80076800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.875293+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 365 ms_handle_reset con 0x558bf52a6c00 session 0x558bf63b3860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 366 ms_handle_reset con 0x558bf1a28800 session 0x558bf2d00d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 186097664 unmapped: 79953920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.875483+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 186138624 unmapped: 79912960 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.875632+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 367 ms_handle_reset con 0x558bf3ffa000 session 0x558bf4536960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 367 ms_handle_reset con 0x558bf5e7c400 session 0x558bf2c07e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 186171392 unmapped: 79880192 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 368 ms_handle_reset con 0x558bf3d09400 session 0x558bf3e44f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.875797+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 368 ms_handle_reset con 0x558bf3f92800 session 0x558bf22b6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 82108416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.875973+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 368 heartbeat osd_stat(store_statfs(0x4c8223000/0x0/0x4ffc00000, data 0x30ae6f12/0x30cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7949094 data_alloc: 234881024 data_used: 22921216
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183984128 unmapped: 82067456 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 370 ms_handle_reset con 0x558bf1a28800 session 0x558bf3e423c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.876169+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 370 ms_handle_reset con 0x558bf3d09400 session 0x558bf2d005a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 83935232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.876326+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182140928 unmapped: 83910656 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 371 ms_handle_reset con 0x558bf3ffa000 session 0x558bf45374a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.876464+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 371 ms_handle_reset con 0x558bf43c0400 session 0x558bf3e43680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.876696+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f1e12000/0x0/0x4ffc00000, data 0x37ac34e/0x38eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.876885+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089271 data_alloc: 234881024 data_used: 22925312
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.877018+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.632959366s of 11.156858444s, submitted: 335
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 372 ms_handle_reset con 0x558bf8a4a800 session 0x558bf63b30e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.877190+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.877332+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 85508096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.877509+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 372 heartbeat osd_stat(store_statfs(0x4f5610000/0x0/0x4ffc00000, data 0x37adde2/0x38ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 85508096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.877623+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092521 data_alloc: 234881024 data_used: 22937600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 85516288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.877747+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 372 heartbeat osd_stat(store_statfs(0x4f5610000/0x0/0x4ffc00000, data 0x37adde2/0x38ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 85508096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.877915+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 85508096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.878077+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 373 ms_handle_reset con 0x558bf311fc00 session 0x558bf43e9860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 181075968 unmapped: 84975616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.878243+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180568064 unmapped: 85483520 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.878467+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 374 heartbeat osd_stat(store_statfs(0x4f5586000/0x0/0x4ffc00000, data 0x38378dc/0x3978000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 374 ms_handle_reset con 0x558bf5e7a000 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 374 ms_handle_reset con 0x558bf2d0c000 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3112802 data_alloc: 234881024 data_used: 22949888
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180568064 unmapped: 85483520 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 374 ms_handle_reset con 0x558bf2d0d800 session 0x558bf2d001e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.878583+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 85434368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.878717+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3b400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.271022797s of 10.640490532s, submitted: 61
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 375 heartbeat osd_stat(store_statfs(0x4f5575000/0x0/0x4ffc00000, data 0x38c40e6/0x3985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 375 ms_handle_reset con 0x558bf5e3b400 session 0x558bf4537a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 375 ms_handle_reset con 0x558bf5f92400 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 85434368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.880426+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180723712 unmapped: 85327872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.880613+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf2d0d800 session 0x558bf3e445a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf2d0c000 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180723712 unmapped: 85327872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.880753+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146948 data_alloc: 234881024 data_used: 22986752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 180723712 unmapped: 85327872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf3ffa800 session 0x558bf2c06960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311fc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.880889+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 heartbeat osd_stat(store_statfs(0x4f5142000/0x0/0x4ffc00000, data 0x3969d8e/0x39a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf311fc00 session 0x558bf40461e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 186187776 unmapped: 79863808 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.881007+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf2d0d800 session 0x558bf3d503c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf2d0c000 session 0x558bf19fb4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 ms_handle_reset con 0x558bf3ffa800 session 0x558bf2d02960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 79855616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.881141+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 377 ms_handle_reset con 0x558bf5f92400 session 0x558bf44dcd20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2c7c800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 377 ms_handle_reset con 0x558bf2c7c800 session 0x558bf2b56000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 82927616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 377 ms_handle_reset con 0x558bf2d0c000 session 0x558bf21632c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.881305+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 377 ms_handle_reset con 0x558bf2d0d800 session 0x558bf3e41e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 82927616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.881478+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf3ffa800 session 0x558bf2d012c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184253 data_alloc: 234881024 data_used: 25194496
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183132160 unmapped: 82919424 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.881591+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf3d06c00 session 0x558bf4579c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf5f62800 session 0x558bf63b2780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183148544 unmapped: 82903040 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f4e35000/0x0/0x4ffc00000, data 0x3c7957c/0x3cb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.881725+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.189070702s of 10.009735107s, submitted: 142
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184246272 unmapped: 81805312 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.881871+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf5f92400 session 0x558bf2d6d860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184246272 unmapped: 81805312 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.882195+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _finish_auth 0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.883329+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184246272 unmapped: 81805312 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.882342+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf3e18c00 session 0x558bf2b1dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf3e432c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189420 data_alloc: 234881024 data_used: 25862144
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184270848 unmapped: 81780736 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.882459+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184270848 unmapped: 81780736 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.882595+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 379 ms_handle_reset con 0x558bf5d40800 session 0x558bf4083680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f4e1f000/0x0/0x4ffc00000, data 0x3e5611a/0x3cce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184279040 unmapped: 81772544 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.882726+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184279040 unmapped: 81772544 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.882901+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184270848 unmapped: 81780736 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.883120+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 379 ms_handle_reset con 0x558bf5f62400 session 0x558bf3e45680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216637 data_alloc: 234881024 data_used: 25882624
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f4e1f000/0x0/0x4ffc00000, data 0x3e5617c/0x3ccf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184287232 unmapped: 81764352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.883247+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 ms_handle_reset con 0x558bf3e18c00 session 0x558bf52a03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 ms_handle_reset con 0x558bf3f91400 session 0x558bf52b0f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5d40800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: mgrc ms_handle_reset ms_handle_reset con 0x558bf2d0a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1430667654
Nov 29 08:25:31 compute-0 ceph-osd[89968]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1430667654,v1:192.168.122.100:6801/1430667654]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: get_auth_request con 0x558bf2c7c800 auth_method 0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 80928768 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.883393+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f4e1a000/0x0/0x4ffc00000, data 0x3e57db2/0x3cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 ms_handle_reset con 0x558bf43bec00 session 0x558bf49792c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.183272362s of 10.204523087s, submitted: 54
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 ms_handle_reset con 0x558bf215cc00 session 0x558bf4978d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e58c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 80912384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.883516+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf3fb6800 session 0x558bf2b1c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf8a4b000 session 0x558bf2c07680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184582144 unmapped: 81469440 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.883636+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184582144 unmapped: 81469440 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.883831+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf7a32c00 session 0x558bf22b6b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3234895 data_alloc: 234881024 data_used: 25890816
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf5f92800 session 0x558bf2d01680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 ms_handle_reset con 0x558bf5ea6800 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184614912 unmapped: 81436672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.884009+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 382 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf22b72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184623104 unmapped: 81428480 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.884181+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f5dc8000/0x0/0x4ffc00000, data 0x3ee3690/0x3d65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 ms_handle_reset con 0x558bf3e18c00 session 0x558bf3e445a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 ms_handle_reset con 0x558bf6115400 session 0x558bf52b0b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 ms_handle_reset con 0x558bf215cc00 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 81444864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.884319+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184606720 unmapped: 81444864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.884483+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 383 ms_handle_reset con 0x558bf5ea6800 session 0x558bf43e9860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf3e18c00 session 0x558bf4040b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf5f92800 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184647680 unmapped: 81403904 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.884613+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248490 data_alloc: 234881024 data_used: 26021888
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.884852+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184655872 unmapped: 81395712 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf42e50e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf215cc00 session 0x558bf2d02960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf45e9000 session 0x558bf2fceb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.885055+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184655872 unmapped: 81395712 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf45e8000 session 0x558bf3e44f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 ms_handle_reset con 0x558bf3dba400 session 0x558bf63b3860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 385 ms_handle_reset con 0x558bf24ab400 session 0x558bf22b70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f5dc2000/0x0/0x4ffc00000, data 0x3ee6e8c/0x3d6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 385 ms_handle_reset con 0x558bf3f92400 session 0x558bf422fc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.885188+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185729024 unmapped: 80322560 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.431030273s of 10.601291656s, submitted: 109
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf1947c00 session 0x558bf4578960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf215cc00 session 0x558bf2d02960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf465c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf24ab400 session 0x558bf52a14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf0aa5c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf0aa5c00 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.885417+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185737216 unmapped: 80314368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.885585+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185737216 unmapped: 80314368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf215cc00 session 0x558bf4398b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dba400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 386 ms_handle_reset con 0x558bf3dba400 session 0x558bf3e45680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 387 ms_handle_reset con 0x558bf24ab400 session 0x558bf40412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253736 data_alloc: 234881024 data_used: 26062848
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.885760+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf3f92400 session 0x558bf3053c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 80257024 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf1947c00 session 0x558bf4979680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf45e8000 session 0x558bf2b1dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf45e9000 session 0x558bf44dd2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf3f92400 session 0x558bf4082000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f6158000/0x0/0x4ffc00000, data 0x3b4ee06/0x39d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.885875+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 80257024 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf5f62800 session 0x558bf42e5a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf52b03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.886046+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 ms_handle_reset con 0x558bf45e8000 session 0x558bf2d012c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185819136 unmapped: 80232448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 ms_handle_reset con 0x558bf1947c00 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 ms_handle_reset con 0x558bf3f92400 session 0x558bf2d02d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.886252+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185827328 unmapped: 80224256 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 ms_handle_reset con 0x558bf45e9000 session 0x558bf4399c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 ms_handle_reset con 0x558bf2ae8000 session 0x558bf2343680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 ms_handle_reset con 0x558bf3d09000 session 0x558bf45781e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.886366+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 80216064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 390 ms_handle_reset con 0x558bf1947c00 session 0x558bf44e0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215528 data_alloc: 234881024 data_used: 26021888
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 390 ms_handle_reset con 0x558bf3f92400 session 0x558bf2d01c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.886512+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf45e8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 391 ms_handle_reset con 0x558bf45e8000 session 0x558bf403f680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 80158720 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 391 ms_handle_reset con 0x558bf1947c00 session 0x558bf4583c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.886639+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 80093184 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f615c000/0x0/0x4ffc00000, data 0x3843eca/0x39d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 392 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.886772+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 392 ms_handle_reset con 0x558bf2ae8000 session 0x558bf45825a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 80093184 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.213545799s of 10.479084969s, submitted: 235
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf2d0d000 session 0x558bf4582d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.886889+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf3ffd800 session 0x558bf52a1680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf8a4bc00 session 0x558bf4582b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 80093184 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 heartbeat osd_stat(store_statfs(0x4f6200000/0x0/0x4ffc00000, data 0x3718b12/0x392c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.887028+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 80093184 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf24be400 session 0x558bf23fc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf3ff8400 session 0x558bf422ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201477 data_alloc: 234881024 data_used: 25341952
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.887209+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 185966592 unmapped: 80084992 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 393 ms_handle_reset con 0x558bf2d0d000 session 0x558bf44e12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 ms_handle_reset con 0x558bf1947c00 session 0x558bf3d51a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24be400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.887492+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 82042880 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf3e41c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 ms_handle_reset con 0x558bf24be400 session 0x558bf3d50b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.887655+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176832512 unmapped: 89219072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.887842+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176848896 unmapped: 89202688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 ms_handle_reset con 0x558bf2ae8000 session 0x558bf44dde00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 ms_handle_reset con 0x558bf3ffb800 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f80bd000/0x0/0x4ffc00000, data 0x185e74a/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.888084+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 89194496 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839658 data_alloc: 218103808 data_used: 7958528
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.888253+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 395 ms_handle_reset con 0x558bf3ff8400 session 0x558bf2b1dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176873472 unmapped: 89178112 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.888483+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176873472 unmapped: 89178112 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 395 handle_osd_map epochs [396,396], i have 396, src has [1,396]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 396 ms_handle_reset con 0x558bf8a4b000 session 0x558bf40412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.888635+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176914432 unmapped: 89137152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 397 ms_handle_reset con 0x558bf19b4000 session 0x558bf4537e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 397 ms_handle_reset con 0x558bf2d0d000 session 0x558bf42e5a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f80b0000/0x0/0x4ffc00000, data 0x1863b66/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d06000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.888767+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176914432 unmapped: 89137152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.164624214s of 10.475586891s, submitted: 131
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 398 ms_handle_reset con 0x558bf3d06000 session 0x558bf3e43c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 398 ms_handle_reset con 0x558bf52a6000 session 0x558bf2b1c5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.888900+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 89120768 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 398 ms_handle_reset con 0x558bf19b4000 session 0x558bf316c3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2855609 data_alloc: 218103808 data_used: 7974912
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.889024+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 89120768 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 399 ms_handle_reset con 0x558bf43c1400 session 0x558bf2d01c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.889151+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 399 ms_handle_reset con 0x558bf3f93c00 session 0x558bf22b6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 89112576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 ms_handle_reset con 0x558bf43c0800 session 0x558bf23fd680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 ms_handle_reset con 0x558bf193cc00 session 0x558bf2fcf680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 ms_handle_reset con 0x558bf3dbb800 session 0x558bf2b57680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.889331+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 89112576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 ms_handle_reset con 0x558bf19b4000 session 0x558bf3e40000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f80a6000/0x0/0x4ffc00000, data 0x1868e64/0x1a86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.889511+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 89112576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 401 ms_handle_reset con 0x558bf3f93c00 session 0x558bf3e403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c0800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 401 ms_handle_reset con 0x558bf43c0800 session 0x558bf40401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.889664+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 401 ms_handle_reset con 0x558bf3ff8c00 session 0x558bf318ef00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176955392 unmapped: 89096192 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 402 ms_handle_reset con 0x558bf3dbb800 session 0x558bf43201e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2868531 data_alloc: 218103808 data_used: 7974912
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 402 ms_handle_reset con 0x558bf5f92000 session 0x558bf23fcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.889787+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 403 ms_handle_reset con 0x558bf19b4000 session 0x558bf4320000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 403 ms_handle_reset con 0x558bf43c1400 session 0x558bf2b56000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176996352 unmapped: 89055232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.890020+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a36400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf7a36400 session 0x558bf422f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 89030656 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf19b4000 session 0x558bf316dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf3dbb800 session 0x558bf2b1c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf43c1400 session 0x558bf3e40960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf3dbb400 session 0x558bf3e44f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 ms_handle_reset con 0x558bf5ff7c00 session 0x558bf21630e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.890160+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177029120 unmapped: 89022464 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 405 ms_handle_reset con 0x558bf5f92000 session 0x558bf4583a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f6ef7000/0x0/0x4ffc00000, data 0x1871c8a/0x1a96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.890285+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176144384 unmapped: 89907200 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.191250801s of 10.003265381s, submitted: 215
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 406 ms_handle_reset con 0x558bf19b4000 session 0x558bf3053c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 406 ms_handle_reset con 0x558bf3dbb800 session 0x558bf22b63c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.890465+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176144384 unmapped: 89907200 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 407 ms_handle_reset con 0x558bf3dbb400 session 0x558bf3d505a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 407 ms_handle_reset con 0x558bf43c1400 session 0x558bf2d00960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 407 ms_handle_reset con 0x558bf43be800 session 0x558bf2d021e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885134 data_alloc: 218103808 data_used: 7995392
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 407 ms_handle_reset con 0x558bf19b4000 session 0x558bf422e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.890800+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176160768 unmapped: 89890816 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf3dbb400 session 0x558bf2d00f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.890916+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf24ab400 session 0x558bf3e43a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176168960 unmapped: 89882624 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0b800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf2d0b800 session 0x558bf45790e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.891115+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 89874432 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf19b4000 session 0x558bf52b1680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24ab400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf24ab400 session 0x558bf2d4c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.891333+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 89866240 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f6ef7000/0x0/0x4ffc00000, data 0x187702c/0x1a97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3dbb400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf3dbb400 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.891487+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 ms_handle_reset con 0x558bf43be800 session 0x558bf2c072c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 89866240 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885274 data_alloc: 218103808 data_used: 7999488
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.891634+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 89858048 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 409 ms_handle_reset con 0x558bf3d09000 session 0x558bf2162780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x1878b5e/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.891792+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f93400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 89858048 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 409 ms_handle_reset con 0x558bf5f93400 session 0x558bf2163a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f6ef3000/0x0/0x4ffc00000, data 0x1878b5e/0x1a9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.891937+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 89858048 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 410 ms_handle_reset con 0x558bf7a32400 session 0x558bf63b2780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7b800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 410 ms_handle_reset con 0x558bf5e7b800 session 0x558bf52b0b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 410 ms_handle_reset con 0x558bf24bd800 session 0x558bf3e401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.892178+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 89849856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.001776695s of 10.045209885s, submitted: 123
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a33000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 411 ms_handle_reset con 0x558bf7a33000 session 0x558bf22b70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 411 ms_handle_reset con 0x558bf6113800 session 0x558bf4046960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.892371+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 89825280 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896610 data_alloc: 218103808 data_used: 8011776
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.892561+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 89825280 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.892679+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 89825280 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f6eec000/0x0/0x4ffc00000, data 0x187c376/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.892799+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 412 ms_handle_reset con 0x558bf43c2c00 session 0x558bf4978780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 412 ms_handle_reset con 0x558bf24bd800 session 0x558bf3e45860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 89800704 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.892936+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 89800704 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a34800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 413 ms_handle_reset con 0x558bf2a42000 session 0x558bf4321680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f6ee5000/0x0/0x4ffc00000, data 0x187fc1e/0x1aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.893070+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 89800704 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904164 data_alloc: 218103808 data_used: 8015872
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.893237+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 413 handle_osd_map epochs [414,415], i have 413, src has [1,415]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 89792512 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf7a34800 session 0x558bf2163c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf5ea6800 session 0x558bf22b72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.893378+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 89792512 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.893559+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf318fa40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 89776128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf5ff7400 session 0x558bf318e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf5f93c00 session 0x558bf403e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf5ee8000 session 0x558bf2fb85a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf3e18c00 session 0x558bf4536d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.893804+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 89735168 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.299116135s of 10.216234207s, submitted: 61
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf623cc00 session 0x558bf3e45c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf318e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.893974+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176316416 unmapped: 89735168 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f6e9c000/0x0/0x4ffc00000, data 0x18c3472/0x1af2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925348 data_alloc: 218103808 data_used: 8019968
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.894133+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 89726976 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f6e9c000/0x0/0x4ffc00000, data 0x18c3472/0x1af2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.894276+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 89726976 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.894417+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 89726976 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.894629+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177397760 unmapped: 88653824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf5ea6800 session 0x558bf2163c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.894785+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf2ae8c00 session 0x558bf2d00f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf171d400 session 0x558bf22b70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177405952 unmapped: 88645632 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf3d505a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2936503 data_alloc: 218103808 data_used: 8032256
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.894960+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 88637440 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf3e18c00 session 0x558bf22b63c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f6e96000/0x0/0x4ffc00000, data 0x18c6ad7/0x1af7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.895146+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 88637440 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.895395+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf62b6000 session 0x558bf3e44f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f6e96000/0x0/0x4ffc00000, data 0x18c6ad7/0x1af7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177414144 unmapped: 88637440 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf623d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf215d000 session 0x558bf3053860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 ms_handle_reset con 0x558bf171d400 session 0x558bf21fba40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.895608+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 177422336 unmapped: 88629248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 418 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf44dd860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.874063969s of 10.300529480s, submitted: 97
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 418 ms_handle_reset con 0x558bf3e18c00 session 0x558bf3e430e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf62b6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.895841+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178495488 unmapped: 87556096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf62b6000 session 0x558bf4040f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2940998 data_alloc: 218103808 data_used: 8044544
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.895985+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf623d000 session 0x558bf23fdc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 87547904 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.896186+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf5ee9400 session 0x558bf4399860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf8a4b000 session 0x558bf44dcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 87547904 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.896340+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 87539712 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f6e90000/0x0/0x4ffc00000, data 0x18ca28f/0x1afe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.896529+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 87539712 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf2d45400 session 0x558bf4537a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.896699+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf2d0d800 session 0x558bf30e7a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf5e7cc00 session 0x558bf422f680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 ms_handle_reset con 0x558bf2d0d800 session 0x558bf44ddc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 87416832 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 420 ms_handle_reset con 0x558bf2d45400 session 0x558bf40465a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 420 ms_handle_reset con 0x558bf5e7cc00 session 0x558bf2d01e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 420 ms_handle_reset con 0x558bf5ee9400 session 0x558bf4082d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a36000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 420 ms_handle_reset con 0x558bf7a36000 session 0x558bf43e8780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995107 data_alloc: 218103808 data_used: 8065024
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.896895+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 87408640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.897031+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 ms_handle_reset con 0x558bf8a4b000 session 0x558bf4040780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178651136 unmapped: 87400448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 ms_handle_reset con 0x558bf5ff6400 session 0x558bf2c07680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a37800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.897151+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bcc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f686d000/0x0/0x4ffc00000, data 0x1eebb6b/0x2120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178651136 unmapped: 87400448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 ms_handle_reset con 0x558bf24bcc00 session 0x558bf45832c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6114000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f686d000/0x0/0x4ffc00000, data 0x1eebb6b/0x2120000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf6114000 session 0x558bf422e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.897459+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf7a37800 session 0x558bf2fcfe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bcc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf44e1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178651136 unmapped: 87400448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf5ff6400 session 0x558bf2343680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf24bcc00 session 0x558bf52b10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6114000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf6114000 session 0x558bf3e423c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.897671+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.077946186s of 10.358308792s, submitted: 82
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf8a4b000 session 0x558bf52b1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 87392256 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f686b000/0x0/0x4ffc00000, data 0x1eed7b6/0x2123000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bcc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.897789+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001846 data_alloc: 218103808 data_used: 8060928
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 87384064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf5ff6400 session 0x558bf4320b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.897929+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f686a000/0x0/0x4ffc00000, data 0x1eed7c6/0x2124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 87384064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f686a000/0x0/0x4ffc00000, data 0x1eed7c6/0x2124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.898240+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 86818816 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.898472+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 ms_handle_reset con 0x558bf215c000 session 0x558bf403e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 86818816 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.898624+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 423 ms_handle_reset con 0x558bf5ff7800 session 0x558bf52a01e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179159040 unmapped: 86892544 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.898744+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3048684 data_alloc: 234881024 data_used: 13361152
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 423 ms_handle_reset con 0x558bf5ee9800 session 0x558bf4046960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 424 ms_handle_reset con 0x558bf7a32c00 session 0x558bf2162780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 424 ms_handle_reset con 0x558bf6113400 session 0x558bf45790e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179159040 unmapped: 86892544 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 424 ms_handle_reset con 0x558bf43c2000 session 0x558bf4040960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.898845+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179159040 unmapped: 86892544 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.899012+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 425 ms_handle_reset con 0x558bf6113000 session 0x558bf44e0f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6114c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 425 ms_handle_reset con 0x558bf24bf400 session 0x558bf2163a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 86884352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 425 ms_handle_reset con 0x558bf6114c00 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f644f000/0x0/0x4ffc00000, data 0x1ef3165/0x212f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.899176+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179183616 unmapped: 86867968 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 426 ms_handle_reset con 0x558bf6113000 session 0x558bf23fc960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 426 ms_handle_reset con 0x558bf43c2000 session 0x558bf63b2780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 426 ms_handle_reset con 0x558bf24bf400 session 0x558bf4082d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 426 ms_handle_reset con 0x558bf6113400 session 0x558bf40410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.899385+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 86835200 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.101816177s of 10.476975441s, submitted: 78
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.899521+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063357 data_alloc: 234881024 data_used: 13365248
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 428 ms_handle_reset con 0x558bf43bfc00 session 0x558bf63b32c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 86818816 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.899641+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311ec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 428 ms_handle_reset con 0x558bf311ec00 session 0x558bf52b03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 428 ms_handle_reset con 0x558bf5f62400 session 0x558bf30e72c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 86810624 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.899762+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 83353600 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 430 ms_handle_reset con 0x558bf3f92c00 session 0x558bf42e5680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f5d52000/0x0/0x4ffc00000, data 0x25e7aa0/0x282a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.899917+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182714368 unmapped: 83337216 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 431 ms_handle_reset con 0x558bf5ea7c00 session 0x558bf3d51e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 431 ms_handle_reset con 0x558bf5f92800 session 0x558bf4537a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311ec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.900129+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182812672 unmapped: 83238912 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.900287+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147148 data_alloc: 234881024 data_used: 14237696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182812672 unmapped: 83238912 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.900540+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 431 ms_handle_reset con 0x558bf5ea6800 session 0x558bf3d50780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183869440 unmapped: 82182144 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf311ec00 session 0x558bf52a1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a37000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.900672+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf7a37000 session 0x558bf4321a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf2a43800 session 0x558bf422e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 82124800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.900900+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 heartbeat osd_stat(store_statfs(0x4f5d1c000/0x0/0x4ffc00000, data 0x261d21e/0x2862000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 82124800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf5ea6400 session 0x558bf2d03c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf2a43800 session 0x558bf3e434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.901084+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 82124800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311ec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.901294+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf311ec00 session 0x558bf2b1da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.895422935s of 10.435905457s, submitted: 213
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3157871 data_alloc: 234881024 data_used: 14721024
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a37000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 ms_handle_reset con 0x558bf7a37000 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 82124800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.901460+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 82124800 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 432 handle_osd_map epochs [433,434], i have 432, src has [1,434]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.901623+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf403e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f5d1c000/0x0/0x4ffc00000, data 0x261d21e/0x2862000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 82108416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.901814+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 ms_handle_reset con 0x558bf5ea6800 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 82108416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 ms_handle_reset con 0x558bf2a43800 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.901933+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 82108416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf311ec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.902075+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 ms_handle_reset con 0x558bf311ec00 session 0x558bf2d01680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163059 data_alloc: 234881024 data_used: 14733312
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 82116608 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.902241+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 82100224 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.902387+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 82100224 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.902548+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f5d14000/0x0/0x4ffc00000, data 0x26238d6/0x286a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 ms_handle_reset con 0x558bf5ea6800 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 82092032 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f5d10000/0x0/0x4ffc00000, data 0x26254fe/0x286d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 ms_handle_reset con 0x558bf5ff7800 session 0x558bf4399860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.902680+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183967744 unmapped: 82083840 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7d000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.902802+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 ms_handle_reset con 0x558bf2d0dc00 session 0x558bf45792c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170162 data_alloc: 234881024 data_used: 15003648
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.289764404s of 10.891372681s, submitted: 106
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.902954+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 ms_handle_reset con 0x558bf5e3c000 session 0x558bf318e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 ms_handle_reset con 0x558bf5ff7400 session 0x558bf2d001e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 handle_osd_map epochs [436,436], i have 436, src has [1,436]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.903168+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f5d11000/0x0/0x4ffc00000, data 0x26254fe/0x286d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.903396+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.903563+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 ms_handle_reset con 0x558bf24bcc00 session 0x558bf23fcf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf2fcfe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 ms_handle_reset con 0x558bf5e7d000 session 0x558bf4398b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.903698+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bcc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173053 data_alloc: 234881024 data_used: 15015936
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 82075648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf4321a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 ms_handle_reset con 0x558bf2a42000 session 0x558bf4582f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.903854+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf3e18800 session 0x558bf52b03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf24bcc00 session 0x558bf3e434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf3d07000 session 0x558bf52b10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf19b4400 session 0x558bf63b32c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182050816 unmapped: 84000768 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bcc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf24bcc00 session 0x558bf63b34a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.903991+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f6a48000/0x0/0x4ffc00000, data 0x18e9d3b/0x1b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.904146+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 ms_handle_reset con 0x558bf3e18800 session 0x558bf2162f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 438 ms_handle_reset con 0x558bf2a42000 session 0x558bf44dd0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.904248+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.904423+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030413 data_alloc: 218103808 data_used: 8417280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193dc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 438 ms_handle_reset con 0x558bf193dc00 session 0x558bf45825a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 438 ms_handle_reset con 0x558bf3f93c00 session 0x558bf2fb81e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f6a46000/0x0/0x4ffc00000, data 0x18eb94b/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.126141548s of 10.001389503s, submitted: 134
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 439 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf2d030e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.904570+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.904699+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182059008 unmapped: 83992576 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 441 ms_handle_reset con 0x558bf5ee8000 session 0x558bf3e44d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.904880+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 441 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf422e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 83976192 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7b000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.905012+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 441 ms_handle_reset con 0x558bf5e7b000 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf3e18c00 session 0x558bf42e4d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182083584 unmapped: 83968000 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf5e3ac00 session 0x558bf2342960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf21faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf3f93c00 session 0x558bf3e42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.905179+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f6a3a000/0x0/0x4ffc00000, data 0x18f2849/0x1b43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044334 data_alloc: 218103808 data_used: 8429568
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf5ee8000 session 0x558bf4040780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182099968 unmapped: 83951616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf3e18c00 session 0x558bf2342960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf5e3a800 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 ms_handle_reset con 0x558bf3f92000 session 0x558bf2d030e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.905326+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf2d0d800 session 0x558bf4398b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf5ee6400 session 0x558bf2fcfe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 83943424 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.905472+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf2d0d800 session 0x558bf2d001e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf3e18c00 session 0x558bf4399860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182108160 unmapped: 83943424 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf3f92000 session 0x558bf42e4960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f6a36000/0x0/0x4ffc00000, data 0x18f4371/0x1b46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.905643+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf5e3a800 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4ac00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 ms_handle_reset con 0x558bf8a4ac00 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 83935232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.905793+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 ms_handle_reset con 0x558bf3e18c00 session 0x558bf3e432c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 ms_handle_reset con 0x558bf2d0d800 session 0x558bf2b565a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 ms_handle_reset con 0x558bf7a32800 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 83935232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f6a34000/0x0/0x4ffc00000, data 0x18f5f89/0x1b48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 ms_handle_reset con 0x558bf3f92000 session 0x558bf44dda40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.905941+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3049030 data_alloc: 218103808 data_used: 8429568
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 83935232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 ms_handle_reset con 0x558bf3ff9400 session 0x558bf3e41a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a37c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.906083+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.020837784s of 10.170378685s, submitted: 134
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182116352 unmapped: 83935232 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 445 ms_handle_reset con 0x558bf2d0d800 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.906307+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 445 ms_handle_reset con 0x558bf5e3a800 session 0x558bf42e4b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 445 ms_handle_reset con 0x558bf3f92000 session 0x558bf4320b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182132736 unmapped: 83918848 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.906485+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 445 ms_handle_reset con 0x558bf7a32800 session 0x558bf30e6960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182132736 unmapped: 83918848 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 ms_handle_reset con 0x558bf3e18c00 session 0x558bf2d4c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.906649+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 ms_handle_reset con 0x558bf2d0d800 session 0x558bf45821e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 ms_handle_reset con 0x558bf3e18c00 session 0x558bf42e50e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183312384 unmapped: 82739200 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 ms_handle_reset con 0x558bf3f92000 session 0x558bf1de10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f6a2f000/0x0/0x4ffc00000, data 0x18f9841/0x1b4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.906818+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 ms_handle_reset con 0x558bf7a37c00 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3057624 data_alloc: 218103808 data_used: 8630272
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183312384 unmapped: 82739200 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.906964+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 447 ms_handle_reset con 0x558bf6115000 session 0x558bf2d021e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 447 ms_handle_reset con 0x558bf5e3c400 session 0x558bf52b0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183336960 unmapped: 82714624 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.909339+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 447 ms_handle_reset con 0x558bf3e18c00 session 0x558bf23fdc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 447 ms_handle_reset con 0x558bf2d0d800 session 0x558bf3053a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183361536 unmapped: 82690048 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.909584+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 ms_handle_reset con 0x558bf3f92000 session 0x558bf21630e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f6a29000/0x0/0x4ffc00000, data 0x18fd05f/0x1b53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183369728 unmapped: 82681856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.909781+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a37c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183369728 unmapped: 82681856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f6a29000/0x0/0x4ffc00000, data 0x18fd05f/0x1b53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 ms_handle_reset con 0x558bf7a37c00 session 0x558bf42e45a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 ms_handle_reset con 0x558bf5e3c000 session 0x558bf2b57a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.910299+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061702 data_alloc: 218103808 data_used: 8638464
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183377920 unmapped: 82673664 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 ms_handle_reset con 0x558bf2d0d800 session 0x558bf422e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 449 ms_handle_reset con 0x558bf2ae8000 session 0x558bf43205a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.910421+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 82395136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.911159+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.405130863s of 11.061003685s, submitted: 162
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 449 ms_handle_reset con 0x558bf193d800 session 0x558bf52b1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 82395136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.911317+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae9c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 ms_handle_reset con 0x558bf2ae9c00 session 0x558bf4320b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 ms_handle_reset con 0x558bf2ae8000 session 0x558bf3e41a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 ms_handle_reset con 0x558bf193d800 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 82395136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 ms_handle_reset con 0x558bf2d0d800 session 0x558bf30e70e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 ms_handle_reset con 0x558bf5e3c000 session 0x558bf3e432c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.911494+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff8800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 ms_handle_reset con 0x558bf3ff8800 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f69e3000/0x0/0x4ffc00000, data 0x19407c8/0x1b9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 82395136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.911667+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 ms_handle_reset con 0x558bf193d800 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2ae8000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 ms_handle_reset con 0x558bf2d0d800 session 0x558bf4040780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081385 data_alloc: 218103808 data_used: 8658944
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 ms_handle_reset con 0x558bf5e3c000 session 0x558bf44ddc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x19423c8/0x1b9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183664640 unmapped: 82386944 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.911782+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf24bf400 session 0x558bf4047680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf5ee9000 session 0x558bf2d01e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf2ae8000 session 0x558bf2342960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 82362368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:15.912007+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 82362368 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf193d800 session 0x558bf3d501e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:16.912153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf2d0d800 session 0x558bf21faf00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 ms_handle_reset con 0x558bf24bf400 session 0x558bf43e8960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 82354176 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:17.912258+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183697408 unmapped: 82354176 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:18.912536+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3085361 data_alloc: 218103808 data_used: 8671232
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 453 ms_handle_reset con 0x558bf5e3c000 session 0x558bf52b0f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f69d9000/0x0/0x4ffc00000, data 0x1945bc6/0x1ba4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183721984 unmapped: 82329600 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:19.912666+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 453 ms_handle_reset con 0x558bf43bf000 session 0x558bf422e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183721984 unmapped: 82329600 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:20.913022+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 453 ms_handle_reset con 0x558bf6113400 session 0x558bf3e41e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.762818336s of 10.507340431s, submitted: 97
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183730176 unmapped: 82321408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf3f93c00 session 0x558bf52b0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:21.913238+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf5f62400 session 0x558bf3e45a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183730176 unmapped: 82321408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf4320000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf3d07400 session 0x558bf43201e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:22.913667+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf3f93c00 session 0x558bf40401e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf43bf000 session 0x558bf2b57680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 454 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf3e403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 82313216 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 ms_handle_reset con 0x558bf5f62400 session 0x558bf22b6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:23.913794+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 ms_handle_reset con 0x558bf3f93c00 session 0x558bf23fc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3265561 data_alloc: 218103808 data_used: 8687616
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 heartbeat osd_stat(store_statfs(0x4f69d5000/0x0/0x4ffc00000, data 0x19477b6/0x1ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 192167936 unmapped: 73883648 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:24.913962+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 ms_handle_reset con 0x558bf3ff9000 session 0x558bf52a0f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 ms_handle_reset con 0x558bf3d07400 session 0x558bf44e12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 456 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 456 ms_handle_reset con 0x558bf5f92000 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 456 ms_handle_reset con 0x558bf5ee7400 session 0x558bf23fd2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183787520 unmapped: 82264064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:25.914120+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d07400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 456 ms_handle_reset con 0x558bf3d07400 session 0x558bf2fb81e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183787520 unmapped: 82264064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:26.914323+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183795712 unmapped: 82255872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:27.914444+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 457 ms_handle_reset con 0x558bf3f93c00 session 0x558bf2342000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183795712 unmapped: 82255872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:28.914621+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 457 ms_handle_reset con 0x558bf3ff9000 session 0x558bf2b1c1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffbc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 458 ms_handle_reset con 0x558bf3ffbc00 session 0x558bf19fa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399745 data_alloc: 218103808 data_used: 8704000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183803904 unmapped: 82247680 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:29.914745+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f4012000/0x0/0x4ffc00000, data 0x43076db/0x456b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 459 ms_handle_reset con 0x558bf7a35000 session 0x558bf44e0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 459 ms_handle_reset con 0x558bf1946800 session 0x558bf45825a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183812096 unmapped: 82239488 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:30.914898+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf2a43400 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 82223104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffd000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.025494576s of 10.208270073s, submitted: 141
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf3ffd000 session 0x558bf44e12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:31.915060+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 82223104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:32.915244+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a42400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf2a42400 session 0x558bf23fc1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf2a43400 session 0x558bf43201e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffd000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 82223104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 heartbeat osd_stat(store_statfs(0x4f400b000/0x0/0x4ffc00000, data 0x430b172/0x4573000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf7a35000 session 0x558bf3e44780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf3ffd000 session 0x558bf3e45a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf2d0d400 session 0x558bf52b0000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:33.915419+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf215c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 ms_handle_reset con 0x558bf215c000 session 0x558bf3e41e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417338 data_alloc: 218103808 data_used: 8728576
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf1946800 session 0x558bf22b6780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf1947400 session 0x558bf2342960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 183836672 unmapped: 82214912 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf5e3a400 session 0x558bf2d01e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:34.915608+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf5ea7800 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf65c1c00 session 0x558bf2fb9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1947400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 ms_handle_reset con 0x558bf5e3a400 session 0x558bf45365a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 184156160 unmapped: 81895424 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:35.915733+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 heartbeat osd_stat(store_statfs(0x4f3fe3000/0x0/0x4ffc00000, data 0x4330dc4/0x459b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 462 ms_handle_reset con 0x558bf5ea7800 session 0x558bf2b57860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 462 ms_handle_reset con 0x558bf43be000 session 0x558bf422e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182648832 unmapped: 83402752 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 462 ms_handle_reset con 0x558bf19b5000 session 0x558bf4398780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:36.915867+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 462 ms_handle_reset con 0x558bf2d0c400 session 0x558bf3053860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf65c1000 session 0x558bf2343680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 83378176 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:37.915998+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf19b5000 session 0x558bf30e7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182607872 unmapped: 83443712 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:38.916121+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf43be000 session 0x558bf3e40f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474529 data_alloc: 234881024 data_used: 14020608
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf5ea7800 session 0x558bf21fb860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf5e3a400 session 0x558bf52a03c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 ms_handle_reset con 0x558bf19b5000 session 0x558bf2d005a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 83247104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:39.916247+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f3fd9000/0x0/0x4ffc00000, data 0x4336434/0x45a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 83247104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:40.916703+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 ms_handle_reset con 0x558bf24bd800 session 0x558bf63b32c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 ms_handle_reset con 0x558bf5ee6800 session 0x558bf21fb860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182853632 unmapped: 83197952 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:41.916942+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 ms_handle_reset con 0x558bf5e7cc00 session 0x558bf3d51c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.873584747s of 10.347732544s, submitted: 141
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 ms_handle_reset con 0x558bf2a43c00 session 0x558bf43201e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 ms_handle_reset con 0x558bf19b5000 session 0x558bf318fe00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:42.917242+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 466 ms_handle_reset con 0x558bf171d400 session 0x558bf318e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 466 ms_handle_reset con 0x558bf24bd800 session 0x558bf3e454a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:43.917393+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7cc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f3fd2000/0x0/0x4ffc00000, data 0x4339c94/0x45aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490074 data_alloc: 234881024 data_used: 14036992
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 466 ms_handle_reset con 0x558bf2d45400 session 0x558bf2fb81e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:44.917672+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 467 ms_handle_reset con 0x558bf3f92400 session 0x558bf316d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:45.917825+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f3fd1000/0x0/0x4ffc00000, data 0x433b740/0x45ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 ms_handle_reset con 0x558bf5e7cc00 session 0x558bf2d01e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:46.917958+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f3fcc000/0x0/0x4ffc00000, data 0x433d353/0x45b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:47.918128+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 182738944 unmapped: 83312640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 ms_handle_reset con 0x558bf171d400 session 0x558bf63b3e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:48.918231+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bd800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 ms_handle_reset con 0x558bf24bd800 session 0x558bf52a01e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 ms_handle_reset con 0x558bf19b5000 session 0x558bf2fceb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d45400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 ms_handle_reset con 0x558bf2d45400 session 0x558bf4583c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498304 data_alloc: 234881024 data_used: 14069760
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 193404928 unmapped: 72646656 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:49.918353+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 469 ms_handle_reset con 0x558bf65c1000 session 0x558bf44e1a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 193626112 unmapped: 72425472 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:50.918508+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 ms_handle_reset con 0x558bf5e7a400 session 0x558bf30e6960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195772416 unmapped: 70279168 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 ms_handle_reset con 0x558bf3f92800 session 0x558bf2b56000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 ms_handle_reset con 0x558bf52a7000 session 0x558bf63b2f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:51.918693+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f2c0e000/0x0/0x4ffc00000, data 0x45489fd/0x47be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.517469406s of 10.454511642s, submitted: 201
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d0d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195862528 unmapped: 70189056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 ms_handle_reset con 0x558bf2d0d400 session 0x558bf43e9c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 ms_handle_reset con 0x558bf3f92800 session 0x558bf2d00960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:52.918804+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 193568768 unmapped: 72482816 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:53.918934+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 471 ms_handle_reset con 0x558bf52a7000 session 0x558bf4041680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 471 ms_handle_reset con 0x558bf7a35800 session 0x558bf40412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666796 data_alloc: 234881024 data_used: 15953920
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 71401472 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 ms_handle_reset con 0x558bf5e7a400 session 0x558bf4537e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:54.919049+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bec00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 ms_handle_reset con 0x558bf24bec00 session 0x558bf23fc960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 ms_handle_reset con 0x558bf52a7000 session 0x558bf23fd680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 ms_handle_reset con 0x558bf5e7a400 session 0x558bf42e4b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194666496 unmapped: 71385088 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 472 handle_osd_map epochs [473,473], i have 473, src has [1,473]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 473 ms_handle_reset con 0x558bf3f92800 session 0x558bf3e41a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:55.919244+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 473 ms_handle_reset con 0x558bf6115c00 session 0x558bf4536000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf193d800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 473 ms_handle_reset con 0x558bf193d800 session 0x558bf2b57860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194707456 unmapped: 71344128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:56.919376+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 ms_handle_reset con 0x558bf7a35800 session 0x558bf2d001e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f17e7000/0x0/0x4ffc00000, data 0x556ce60/0x57e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 ms_handle_reset con 0x558bf3f92800 session 0x558bf44e10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 71426048 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:57.919519+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 ms_handle_reset con 0x558bf52a7000 session 0x558bf316cb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 ms_handle_reset con 0x558bf5e7a400 session 0x558bf45363c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194633728 unmapped: 71417856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:58.919650+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f17e4000/0x0/0x4ffc00000, data 0x556ea42/0x57e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668435 data_alloc: 234881024 data_used: 15945728
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194633728 unmapped: 71417856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:59.919791+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 475 ms_handle_reset con 0x558bf6115c00 session 0x558bf422ed20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194633728 unmapped: 71417856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:00.920011+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 475 ms_handle_reset con 0x558bf3f92800 session 0x558bf44e14a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194633728 unmapped: 71417856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:01.920167+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 475 ms_handle_reset con 0x558bf52a7000 session 0x558bf3e41c20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 476 ms_handle_reset con 0x558bf5e7a400 session 0x558bf3d50b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 71409664 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.486587524s of 10.291666031s, submitted: 205
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:02.920356+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 476 ms_handle_reset con 0x558bf6115c00 session 0x558bf2b57a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 71409664 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:03.920522+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673711 data_alloc: 234881024 data_used: 15945728
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f17df000/0x0/0x4ffc00000, data 0x55722a2/0x57ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194641920 unmapped: 71409664 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:04.920706+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf8a4bc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 476 ms_handle_reset con 0x558bf8a4bc00 session 0x558bf3e40000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 ms_handle_reset con 0x558bf3f92800 session 0x558bf4082d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f17da000/0x0/0x4ffc00000, data 0x5573e22/0x57f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 71401472 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:05.920871+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 71401472 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:06.921076+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf52a7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f17d9000/0x0/0x4ffc00000, data 0x5573e84/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194650112 unmapped: 71401472 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6115c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f17d9000/0x0/0x4ffc00000, data 0x5573e84/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:07.921277+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 ms_handle_reset con 0x558bf6115c00 session 0x558bf2b1d4a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 478 ms_handle_reset con 0x558bf52a7000 session 0x558bf52b10e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a33800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f93400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 478 ms_handle_reset con 0x558bf3f93400 session 0x558bf2342d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194674688 unmapped: 71376896 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:08.921557+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 478 ms_handle_reset con 0x558bf7a33800 session 0x558bf3053a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 478 ms_handle_reset con 0x558bf5e7a400 session 0x558bf42e4000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690742 data_alloc: 234881024 data_used: 15970304
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 479 ms_handle_reset con 0x558bf43c2c00 session 0x558bf4536d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf65c1400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194682880 unmapped: 71368704 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:09.921721+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 480 ms_handle_reset con 0x558bf65c1400 session 0x558bf318e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194682880 unmapped: 71368704 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 480 ms_handle_reset con 0x558bf5ea6800 session 0x558bf4536780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:10.942785+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bf000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 480 ms_handle_reset con 0x558bf24bf000 session 0x558bf4536b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 481 ms_handle_reset con 0x558bf19b5000 session 0x558bf2b563c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194699264 unmapped: 71352320 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:11.943616+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7a400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 481 ms_handle_reset con 0x558bf5e7a400 session 0x558bf2b56960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f17cf000/0x0/0x4ffc00000, data 0x557ae96/0x57fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 71319552 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.734150887s of 10.019069672s, submitted: 112
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:12.943794+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 ms_handle_reset con 0x558bf43c2c00 session 0x558bf3e45a40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 ms_handle_reset con 0x558bf1946800 session 0x558bf30e65a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 ms_handle_reset con 0x558bf1947400 session 0x558bf52b1e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 ms_handle_reset con 0x558bf5ea6800 session 0x558bf4578000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 71278592 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 ms_handle_reset con 0x558bf1946800 session 0x558bf23fcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:13.944062+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 483 ms_handle_reset con 0x558bf19b5000 session 0x558bf44e0f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f17cb000/0x0/0x4ffc00000, data 0x557cada/0x5801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694582 data_alloc: 234881024 data_used: 15880192
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 71278592 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:14.945037+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff6400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 483 ms_handle_reset con 0x558bf5ff6400 session 0x558bf318e3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d0400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 71278592 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:15.945352+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 483 ms_handle_reset con 0x558bf43d0400 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 71270400 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:16.945501+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f17ea000/0x0/0x4ffc00000, data 0x555c2d8/0x57e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 71270400 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:17.945799+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a32400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 484 ms_handle_reset con 0x558bf7a32400 session 0x558bf45823c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 484 ms_handle_reset con 0x558bf1946800 session 0x558bf42e41e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194781184 unmapped: 71270400 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:18.945946+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 485 ms_handle_reset con 0x558bf19b5000 session 0x558bf2d4c780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704106 data_alloc: 234881024 data_used: 15888384
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:19.946330+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194789376 unmapped: 71262208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 485 ms_handle_reset con 0x558bf5ee6800 session 0x558bf3e430e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f17e4000/0x0/0x4ffc00000, data 0x555fa64/0x57e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:20.946590+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194797568 unmapped: 71254016 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 486 ms_handle_reset con 0x558bf3ff9800 session 0x558bf3d51e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1946800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:21.946761+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194813952 unmapped: 71237632 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 487 ms_handle_reset con 0x558bf1946800 session 0x558bf2342000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:22.946912+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194830336 unmapped: 71221248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.986619949s of 10.396297455s, submitted: 152
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 487 ms_handle_reset con 0x558bf19b5000 session 0x558bf21fa3c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:23.947189+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194830336 unmapped: 71221248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 487 handle_osd_map epochs [488,488], i have 488, src has [1,488]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 488 ms_handle_reset con 0x558bf3ff9800 session 0x558bf2b1dc20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718940 data_alloc: 234881024 data_used: 15921152
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f17dc000/0x0/0x4ffc00000, data 0x55632fe/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:24.947392+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 71213056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:25.947700+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 71213056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf171d400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 488 ms_handle_reset con 0x558bf171d400 session 0x558bf44dda40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ff9000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 488 ms_handle_reset con 0x558bf3ff9000 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:26.947932+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194846720 unmapped: 71204864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:27.948144+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194846720 unmapped: 71204864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:28.948285+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194846720 unmapped: 71204864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715876 data_alloc: 234881024 data_used: 15917056
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:29.948449+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194846720 unmapped: 71204864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f17df000/0x0/0x4ffc00000, data 0x556328c/0x57ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:30.948720+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194854912 unmapped: 71196672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 ms_handle_reset con 0x558bf5ee8c00 session 0x558bf3e42780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:31.948951+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194854912 unmapped: 71196672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f17db000/0x0/0x4ffc00000, data 0x5564d62/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f17db000/0x0/0x4ffc00000, data 0x5564d62/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:32.949160+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194854912 unmapped: 71196672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 ms_handle_reset con 0x558bf3fb7400 session 0x558bf19fb860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f17db000/0x0/0x4ffc00000, data 0x5564d62/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e3c400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 ms_handle_reset con 0x558bf5e3c400 session 0x558bf2c07680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:33.949347+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499155045s of 10.670278549s, submitted: 62
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194854912 unmapped: 71196672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 ms_handle_reset con 0x558bf7a35400 session 0x558bf23425a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3722769 data_alloc: 234881024 data_used: 15925248
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:34.949491+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194879488 unmapped: 71172096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:35.949690+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194879488 unmapped: 71172096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:36.949828+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194879488 unmapped: 71172096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:37.949997+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194904064 unmapped: 71147520 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x556682c/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x556682c/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:38.950164+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71131136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727887 data_alloc: 234881024 data_used: 16023552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:39.950304+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71131136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x556682c/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:40.950453+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71131136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:41.950669+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71131136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f17d7000/0x0/0x4ffc00000, data 0x556682c/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:42.950854+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71131136 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf19b5000 session 0x558bf3e42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:43.951059+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 71122944 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730263 data_alloc: 234881024 data_used: 16023552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:44.951226+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 71122944 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:45.951375+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ea6c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.649665833s of 11.990933418s, submitted: 25
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194928640 unmapped: 71122944 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf5ea6c00 session 0x558bf318e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a36c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:46.951491+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194936832 unmapped: 71114752 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf7a36c00 session 0x558bf21630e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:47.951603+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194936832 unmapped: 71114752 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f17d8000/0x0/0x4ffc00000, data 0x556682c/0x57f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:48.951771+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 194945024 unmapped: 71106560 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728985 data_alloc: 234881024 data_used: 16023552
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:49.951937+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196255744 unmapped: 69795840 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e59800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf5e59800 session 0x558bf2d00f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:50.952060+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197312512 unmapped: 68739072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:51.952248+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d7000/0x0/0x4ffc00000, data 0x596683c/0x5bf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197312512 unmapped: 68739072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf6113000 session 0x558bf44e0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a35800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:52.952379+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197312512 unmapped: 68739072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf7a35800 session 0x558bf52b0d20
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:53.952535+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770162 data_alloc: 234881024 data_used: 18972672
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x596682c/0x5bf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:54.952701+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x596682c/0x5bf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:55.952856+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.279092789s of 10.325287819s, submitted: 52
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:56.953005+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:57.953146+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:58.953269+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771250 data_alloc: 234881024 data_used: 18968576
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:59.953399+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x596682c/0x5bf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:00.953579+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197328896 unmapped: 68722688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:01.953686+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195756032 unmapped: 70295552 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:02.953944+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 70197248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:03.954083+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 70197248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf7a35000 session 0x558bf2d030e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772266 data_alloc: 234881024 data_used: 19394560
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf5ee7800 session 0x558bf3053860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:04.954235+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 70197248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:05.954379+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x596682c/0x5bf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 70197248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.677596092s of 10.296804428s, submitted: 11
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:06.954509+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 70197248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:07.954651+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195870720 unmapped: 70180864 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f13d9000/0x0/0x4ffc00000, data 0x596681c/0x5bf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:08.954847+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 70172672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771633 data_alloc: 234881024 data_used: 19394560
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:09.954976+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf5f62c00 session 0x558bf4046b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 70172672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 ms_handle_reset con 0x558bf3e18400 session 0x558bf21fa780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:10.955168+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 70172672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:11.955391+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 70172672 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 490 handle_osd_map epochs [491,491], i have 491, src has [1,491]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf43be000 session 0x558bf63b21e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:12.955536+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 70156288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf3e18400 session 0x558bf19fb2c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf43be000 session 0x558bf3e410e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:13.955653+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf5f62c00 session 0x558bf2b1c960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf5ee7800 session 0x558bf40403c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195928064 unmapped: 70123520 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f13d5000/0x0/0x4ffc00000, data 0x59683f0/0x5bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751390 data_alloc: 234881024 data_used: 19394560
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:14.955804+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195928064 unmapped: 70123520 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b4400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf19b4400 session 0x558bf3e443c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf43d1000 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf3e18400 session 0x558bf403f0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf43be000 session 0x558bf52a05a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:15.955947+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 195985408 unmapped: 70066176 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf5ee7800 session 0x558bf2b563c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f62c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.255308151s of 10.013498306s, submitted: 83
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5e7c000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 ms_handle_reset con 0x558bf5e7c000 session 0x558bf4579860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:16.956772+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196018176 unmapped: 70033408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _renew_subs
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 ms_handle_reset con 0x558bf3e18400 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 ms_handle_reset con 0x558bf5f62c00 session 0x558bf52b12c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:17.957369+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196050944 unmapped: 70000640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:18.957780+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43be000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 ms_handle_reset con 0x558bf43be000 session 0x558bf42e5e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196050944 unmapped: 70000640 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 ms_handle_reset con 0x558bf43d1000 session 0x558bf52a05a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752451 data_alloc: 234881024 data_used: 19402752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:19.957946+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f17d3000/0x0/0x4ffc00000, data 0x5569fb6/0x57fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:20.958311+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f17d3000/0x0/0x4ffc00000, data 0x5569fb6/0x57fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:21.958687+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f17d3000/0x0/0x4ffc00000, data 0x5569fb6/0x57fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:22.958909+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:23.959550+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3752451 data_alloc: 234881024 data_used: 19402752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:24.959866+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:25.960466+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:26.960653+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:27.960836+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:28.961050+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755425 data_alloc: 234881024 data_used: 19402752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:29.961280+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:30.961447+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:31.961659+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:32.961864+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:33.962049+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755425 data_alloc: 234881024 data_used: 19402752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:34.962195+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:35.962319+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 112K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.80 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 30.99 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5503 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:36.962569+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:37.962829+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:38.963048+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 69992448 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755425 data_alloc: 234881024 data_used: 19402752
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:39.963282+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 69976064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:40.963449+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 69976064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:41.963817+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 69976064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:42.964203+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 69976064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:43.964385+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 69976064 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3d09800 session 0x558bf2d030e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3e18400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3e18400 session 0x558bf44e0960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754465 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:44.964582+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:45.964825+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:46.965043+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:47.965251+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:48.965523+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754465 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:49.965641+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17d0000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196280320 unmapped: 69771264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.894111633s of 34.092514038s, submitted: 77
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:50.965875+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:51.966045+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5400 session 0x558bf2d00f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:52.966211+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f1791000/0x0/0x4ffc00000, data 0x55aba70/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:53.966370+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757071 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:54.966583+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:55.966820+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f1791000/0x0/0x4ffc00000, data 0x55aba70/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:56.967145+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:57.967273+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:58.967455+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196288512 unmapped: 69763072 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f1791000/0x0/0x4ffc00000, data 0x55aba70/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757071 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:59.967671+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196304896 unmapped: 69746688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:00.967845+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196304896 unmapped: 69746688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:01.968026+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196304896 unmapped: 69746688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:02.968181+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f1791000/0x0/0x4ffc00000, data 0x55aba70/0x583d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196304896 unmapped: 69746688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:03.968336+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2478800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf2478800 session 0x558bf318e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f92800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.593790054s of 13.605471611s, submitted: 2
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 196304896 unmapped: 69746688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3f92800 session 0x558bf3e42000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf24bfc00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3850134 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:04.968568+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201580544 unmapped: 64471040 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf24bfc00 session 0x558bf2c07680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5400 session 0x558bf19ed0e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:05.968728+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:06.968925+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:07.969052+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:08.969153+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941358 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:09.969312+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:10.969444+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:11.969654+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:12.969820+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:13.970019+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941358 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:14.970179+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:15.970466+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:16.970603+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a34400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:17.970848+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197394432 unmapped: 68657152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:18.971078+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 197402624 unmapped: 68648960 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:19.971368+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3942318 data_alloc: 234881024 data_used: 19734528
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 198352896 unmapped: 67698688 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:20.971510+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:21.971667+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:22.971870+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:23.972031+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:24.972208+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3997358 data_alloc: 234881024 data_used: 27516928
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:25.972448+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:26.972589+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd89000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:27.972814+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:28.973034+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:29.973276+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3997358 data_alloc: 234881024 data_used: 27516928
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201703424 unmapped: 64348160 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:30.973451+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.150756836s of 26.977581024s, submitted: 46
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 64217088 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:31.973631+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 64217088 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd79000/0x0/0x4ffc00000, data 0x6fb2ad2/0x7245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:32.973746+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 64217088 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:33.973872+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201834496 unmapped: 64217088 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:34.974058+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4004622 data_alloc: 251658240 data_used: 28053504
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202162176 unmapped: 63889408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:35.974215+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202162176 unmapped: 63889408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efd84000/0x0/0x4ffc00000, data 0x6fb7ad2/0x724a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:36.974381+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202227712 unmapped: 63823872 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:37.974510+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203292672 unmapped: 62758912 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:38.974647+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203448320 unmapped: 62603264 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:39.974767+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115110 data_alloc: 251658240 data_used: 28057600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:40.974924+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:41.975081+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:42.975212+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:43.975398+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:44.975550+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115110 data_alloc: 251658240 data_used: 28057600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:45.975686+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:46.975802+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:47.975998+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:48.976188+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:49.976369+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115110 data_alloc: 251658240 data_used: 28057600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:50.976521+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:51.976906+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:52.977066+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:53.977220+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:54.977401+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115110 data_alloc: 251658240 data_used: 28057600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202743808 unmapped: 63307776 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:55.977540+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.419139862s of 24.792182922s, submitted: 66
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf7a34400 session 0x558bf422e960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3ffa000 session 0x558bf2342960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:56.977743+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:57.978215+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:58.978471+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:59.978670+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4111326 data_alloc: 251658240 data_used: 28061696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:00.978844+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:01.979046+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:02.979260+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:03.979437+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:04.979588+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4111326 data_alloc: 251658240 data_used: 28061696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202760192 unmapped: 63291392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:05.979811+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086016655s of 10.146600723s, submitted: 16
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202776576 unmapped: 63275008 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:06.979945+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:07.980137+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:08.980311+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:09.980466+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4111326 data_alloc: 251658240 data_used: 28061696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:10.980640+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:11.980829+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef99000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:12.980983+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5000 session 0x558bf3d50b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:13.981172+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ff7000 session 0x558bf2d02960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202825728 unmapped: 63225856 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:14.981373+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5000 session 0x558bf30e74a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4111326 data_alloc: 251658240 data_used: 28061696
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f90000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3f90000 session 0x558bf43e9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 63070208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3ffa000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:15.981502+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 63070208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:16.981629+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 63070208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:17.981742+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202981376 unmapped: 63070208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:18.981867+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:19.982031+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4118389 data_alloc: 251658240 data_used: 28086272
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:20.982215+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:21.982389+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:22.982516+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:23.982701+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:24.983259+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4118389 data_alloc: 251658240 data_used: 28086272
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:25.983874+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:26.984975+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:27.985852+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:28.986566+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202997760 unmapped: 63053824 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:29.987171+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.634214401s of 23.862075806s, submitted: 86
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4121021 data_alloc: 251658240 data_used: 28151808
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203063296 unmapped: 62988288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:30.987493+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203063296 unmapped: 62988288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:31.987949+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203063296 unmapped: 62988288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:32.988340+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203063296 unmapped: 62988288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:33.988824+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203063296 unmapped: 62988288 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:34.988976+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4124701 data_alloc: 251658240 data_used: 28299264
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:35.990227+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:36.990517+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:37.990785+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:38.990939+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:39.991082+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4125709 data_alloc: 251658240 data_used: 28303360
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:40.991241+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:41.991517+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:42.991643+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:43.991799+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203104256 unmapped: 62947328 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef73000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:44.991942+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4125709 data_alloc: 251658240 data_used: 28303360
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.400178909s of 15.421634674s, submitted: 7
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:45.992070+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:46.992215+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:47.992495+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:48.992795+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:49.993017+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4124701 data_alloc: 251658240 data_used: 28295168
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef6b000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:50.993233+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:51.993399+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:52.993642+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:53.993859+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:54.994158+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4124701 data_alloc: 251658240 data_used: 28295168
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef6b000/0x0/0x4ffc00000, data 0x7dc6b05/0x805b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5400 session 0x558bf4082b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.059822083s of 10.074403763s, submitted: 3
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3ffa000 session 0x558bf44e0b40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:55.994316+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3d09c00 session 0x558bf2163e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:56.994838+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:57.996050+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:58.997157+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:59.997466+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4eef98000/0x0/0x4ffc00000, data 0x7da2ad2/0x8035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4116364 data_alloc: 251658240 data_used: 28180480
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:00.998083+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:01.998447+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c3c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf43c3c00 session 0x558bf23434a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3d09800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:02.998595+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 62865408 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3d09800 session 0x558bf30e7860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:03.999435+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f16c9000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:04.999859+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776660 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:06.000317+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:07.000660+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:08.000877+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:09.001085+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:10.001593+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 200671232 unmapped: 65380352 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776660 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2d44800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf2d44800 session 0x558bf23fcb40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f16c9000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf2a43400 session 0x558bf2fce1e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:11.001848+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:12.002111+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f16c9000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:13.002276+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:14.002398+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f16c9000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:15.002592+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777140 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f16c9000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:16.002721+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201457664 unmapped: 64593920 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43d1c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.022716522s of 21.258172989s, submitted: 45
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:17.002908+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 64585728 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:18.003052+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201465856 unmapped: 64585728 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf43d1c00 session 0x558bf43e8f00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:19.003245+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:20.003404+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780621 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:21.003560+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17cf000/0x0/0x4ffc00000, data 0x556bae3/0x57ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:22.003783+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:23.003908+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17cf000/0x0/0x4ffc00000, data 0x556bae3/0x57ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:24.004067+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:25.004274+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780621 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:26.004416+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:27.004548+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:28.004684+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:29.004843+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f17cf000/0x0/0x4ffc00000, data 0x556bae3/0x57ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:30.004983+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780621 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:31.005156+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6114c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:32.005306+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf6114c00 session 0x558bf2c07860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf2a43000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.210127831s of 15.230951309s, submitted: 6
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201474048 unmapped: 64577536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf2a43000 session 0x558bf2d4da40
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf7a34800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:33.005427+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 207495168 unmapped: 58556416 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf7a34800 session 0x558bf4583e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4efbce000/0x0/0x4ffc00000, data 0x716baaa/0x73ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee6000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:34.005519+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ee6000 session 0x558bf318e5a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:35.005664+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022507 data_alloc: 234881024 data_used: 19472384
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:36.005832+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:37.005989+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5d6000/0x0/0x4ffc00000, data 0x7764ae2/0x79f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:38.006154+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:39.006276+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:40.006398+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022507 data_alloc: 234881024 data_used: 19472384
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:41.006492+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5d6000/0x0/0x4ffc00000, data 0x7764ae2/0x79f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:42.006642+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:43.006771+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5f92400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5f92400 session 0x558bf52b0780
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:44.006922+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:45.007047+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf1a28800 session 0x558bf3e443c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201236480 unmapped: 64815104 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022507 data_alloc: 234881024 data_used: 19472384
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee9400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ee9400 session 0x558bf3e45e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:46.007198+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43c2c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.601169586s of 13.980533600s, submitted: 67
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf43c2c00 session 0x558bf3d51e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201547776 unmapped: 64503808 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3f91c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf43bf800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:47.007326+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5b2000/0x0/0x4ffc00000, data 0x7788ae2/0x7a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201555968 unmapped: 64495616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:48.007480+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201555968 unmapped: 64495616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:49.007669+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 201555968 unmapped: 64495616 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:50.007784+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 63414272 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091808 data_alloc: 251658240 data_used: 28553216
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:51.007935+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:52.008147+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5b2000/0x0/0x4ffc00000, data 0x7788ae2/0x7a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:53.008335+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:54.008488+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:55.008619+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091808 data_alloc: 251658240 data_used: 28553216
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:56.008749+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5b2000/0x0/0x4ffc00000, data 0x7788ae2/0x7a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:57.008861+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:58.009020+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:59.009148+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:00.009303+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 205578240 unmapped: 60473344 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.297801018s of 14.324241638s, submitted: 8
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef5b2000/0x0/0x4ffc00000, data 0x7788ae2/0x7a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4123156 data_alloc: 251658240 data_used: 29097984
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:01.009429+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 221970432 unmapped: 44081152 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ef082000/0x0/0x4ffc00000, data 0x7788ae2/0x7a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:02.009595+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 215990272 unmapped: 50061312 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:03.010199+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216653824 unmapped: 49397760 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:04.010468+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:05.011076+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee44e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4238108 data_alloc: 251658240 data_used: 29188096
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:06.011347+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:07.011584+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee44e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:08.011784+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:09.012291+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:10.012450+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216662016 unmapped: 49389568 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4238108 data_alloc: 251658240 data_used: 29188096
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.846724510s of 10.525947571s, submitted: 123
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:11.012694+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3f91c00 session 0x558bf422e000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216178688 unmapped: 49872896 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf43bf800 session 0x558bf40412c0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee44e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf1a28800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf1a28800 session 0x558bf3e441e0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:12.013001+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:13.013145+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:14.013319+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:15.013485+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4222420 data_alloc: 251658240 data_used: 29081600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:16.013790+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:17.013947+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:18.014062+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:19.014201+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:20.014330+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4222420 data_alloc: 251658240 data_used: 29081600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:21.014486+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:22.014708+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:23.014829+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:24.014974+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:25.015251+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4222420 data_alloc: 251658240 data_used: 29081600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:26.015451+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:27.015592+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 49840128 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:28.015800+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf3fb7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.231252670s of 17.308881760s, submitted: 26
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf3fb7400 session 0x558bf23425a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216367104 unmapped: 49684480 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7400
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ee8c00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:29.015951+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216367104 unmapped: 49684480 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:30.016185+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216367104 unmapped: 49684480 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4225116 data_alloc: 251658240 data_used: 29081600
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:31.016338+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216367104 unmapped: 49684480 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:32.016519+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:33.016645+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:34.016929+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:35.017065+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4226556 data_alloc: 251658240 data_used: 29208576
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:36.017148+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:37.017308+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:38.017689+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:39.017977+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:40.018169+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4226556 data_alloc: 251658240 data_used: 29208576
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:41.018487+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:42.019188+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.025629044s of 14.034238815s, submitted: 2
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216383488 unmapped: 49668096 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:43.019342+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216580096 unmapped: 49471488 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:44.019516+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:45.019669+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251552 data_alloc: 251658240 data_used: 31457280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:46.019934+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:47.020155+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:48.020389+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:49.020600+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:50.020787+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251552 data_alloc: 251658240 data_used: 31457280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:51.021071+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:52.021401+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:53.021587+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:54.021759+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:55.022014+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251552 data_alloc: 251658240 data_used: 31457280
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:56.022150+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:57.022282+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216834048 unmapped: 49217536 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:58.022474+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.010570526s of 16.028680801s, submitted: 6
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:59.022618+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:00.022793+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251600 data_alloc: 251658240 data_used: 31436800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:01.022927+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:02.023112+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:03.023250+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:04.023385+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:05.023531+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee45e000/0x0/0x4ffc00000, data 0x88dcae2/0x8b70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251600 data_alloc: 251658240 data_used: 31436800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:06.023649+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:07.023788+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:08.023945+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 216883200 unmapped: 49168384 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ff7400 session 0x558bf23fd680
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.644167900s of 10.657393456s, submitted: 5
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ee8c00 session 0x558bf2d014a0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:09.024127+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf19b5000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf19b5000 session 0x558bf2d02960
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:10.024283+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4250828 data_alloc: 251658240 data_used: 32489472
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:11.024507+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:12.024691+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:13.024838+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:14.025047+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4ee482000/0x0/0x4ffc00000, data 0x88b8ae2/0x8b4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 217317376 unmapped: 48734208 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf6113000
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf6113000 session 0x558bf43e9e00
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: handle_auth_request added challenge on 0x558bf5ff7800
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 ms_handle_reset con 0x558bf5ff7800 session 0x558bf2d6d860
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:15.025182+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:16.025361+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:17.025537+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:18.025681+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:19.025817+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:20.025939+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:21.026114+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:22.026325+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:23.026522+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:24.026866+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:25.027049+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:26.027268+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:27.027482+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:28.027702+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:29.027895+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:30.028062+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:31.028283+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:32.028455+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:33.028601+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:34.028763+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:35.028985+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:36.029146+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:37.029366+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:38.029506+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:39.029642+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:40.029775+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658339800' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:41.029913+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210190336 unmapped: 55861248 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:42.030177+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:43.030315+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:44.030455+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:45.030613+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:46.030767+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:47.030933+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:48.031197+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:49.031335+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:50.031488+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:51.031625+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:52.031811+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:53.031941+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:54.032035+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:55.032177+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:56.032367+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:57.032537+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:58.032726+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:59.032917+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:00.033076+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:01.033281+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:02.033501+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:03.033686+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:04.033869+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:05.034024+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:06.034211+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:07.034352+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:08.034482+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:09.034623+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:10.034771+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:11.034962+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:12.035204+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:13.035371+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:14.035534+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:15.035745+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:16.035891+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:17.036231+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:18.036407+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:19.036543+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:20.036683+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:21.036822+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:22.037032+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:23.037181+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:24.037313+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:25.037426+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:26.037546+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:27.037693+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:28.037940+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:29.038148+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:30.038358+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:31.038528+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:32.038692+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:33.038841+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:34.039043+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:35.039218+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:36.039422+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:37.039559+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:38.039707+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:39.039810+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:40.039952+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:41.040082+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:42.040246+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:43.040419+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:44.040566+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:45.040708+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:46.041406+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:47.041551+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:48.041682+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:49.041809+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:50.041939+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:51.042065+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:52.042237+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:53.042586+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:54.042722+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:55.042842+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:56.045321+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:31 compute-0 ceph-osd[89968]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:31 compute-0 ceph-osd[89968]: bluestore.MempoolThread(0x558bf0a2db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3803740 data_alloc: 234881024 data_used: 19468288
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:57.045438+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f11c2000/0x0/0x4ffc00000, data 0x556ba70/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:58.045561+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 210198528 unmapped: 55853056 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'config show' '{prefix=config show}'
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:59.045693+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 209960960 unmapped: 56090624 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:00.045814+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 209928192 unmapped: 56123392 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: tick
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_tickets
Nov 29 08:25:31 compute-0 ceph-osd[89968]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:01.045944+0000)
Nov 29 08:25:31 compute-0 ceph-osd[89968]: prioritycache tune_memory target: 4294967296 mapped: 209821696 unmapped: 56229888 heap: 266051584 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:31 compute-0 ceph-osd[89968]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3981079350' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 08:25:31 compute-0 rsyslogd[1002]: imjournal from <np0005539583:ceph-osd>: begin to drop messages due to rate-limiting
Nov 29 08:25:31 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:25:31 compute-0 ceph-mon[75237]: pgmap v2219: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2114039298' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2609096550' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3497614738' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1658339800' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3981079350' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019201353' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 08:25:31 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 08:25:31 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643801237' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 08:25:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338642482' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 08:25:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464175606' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:32 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 08:25:32 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459785563' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19327 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/4019201353' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2643801237' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3338642482' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2464175606' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 08:25:32 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2459785563' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19329 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19333 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19335 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mon[75237]: pgmap v2220: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:33 compute-0 ceph-mon[75237]: from='client.19327 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mon[75237]: from='client.19329 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mon[75237]: from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:33 compute-0 ceph-mon[75237]: from='client.19333 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 08:25:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694647100' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 08:25:34 compute-0 nova_compute[255040]: 2025-11-29 08:25:34.382 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:34 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:34 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19345 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 08:25:34 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388886003' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19349 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: from='client.19335 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: from='client.19337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: from='client.19341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1694647100' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 08:25:34 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3388886003' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 08:25:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101808573' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 08:25:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998180507' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 08:25:35 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 23814144 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479906 data_alloc: 234881024 data_used: 15777792
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad000 session 0x562226dd2d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x56222585fc00 session 0x562226cdd0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:37.575863+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105463808 unmapped: 21807104 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ac000 session 0x562226cb5680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:38.576790+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 21102592 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:39.576973+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f9bc5000/0x0/0x4ffc00000, data 0x1997c42/0x1aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 21069824 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:40.577140+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 21069824 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:41.577259+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 21069824 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407203 data_alloc: 234881024 data_used: 16547840
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:42.577413+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 21069824 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:43.577537+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 21045248 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad400 session 0x562226cded20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:44.577739+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 21045248 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.951712608s of 14.264819145s, submitted: 78
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f9bc5000/0x0/0x4ffc00000, data 0x1997c42/0x1aa8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:45.577902+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114655232 unmapped: 12615680 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:46.578280+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106283008 unmapped: 20987904 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1626579 data_alloc: 234881024 data_used: 16551936
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:47.578472+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 12435456 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:48.578711+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106446848 unmapped: 20824064 heap: 127270912 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:49.578853+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 25157632 heap: 135667712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:50.579082+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f5017000/0x0/0x4ffc00000, data 0x6546c42/0x6657000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 111960064 unmapped: 23707648 heap: 135667712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f4d66000/0x0/0x4ffc00000, data 0x67f7c42/0x6908000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:51.579294+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110936064 unmapped: 33128448 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2089463 data_alloc: 234881024 data_used: 17428480
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:52.579536+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 31727616 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:53.579734+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 31727616 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:54.579934+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 31727616 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:55.580141+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112336896 unmapped: 31727616 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.749465942s of 11.402972221s, submitted: 99
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:56.580379+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f2d30000/0x0/0x4ffc00000, data 0x882cc42/0x893d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112443392 unmapped: 31621120 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2250791 data_alloc: 234881024 data_used: 17436672
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:57.580512+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31555584 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:58.580715+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31555584 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:59.580853+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 31465472 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:00.581056+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4f152e000/0x0/0x4ffc00000, data 0xa02fc42/0xa140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 31350784 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:01.581221+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 31309824 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2545903 data_alloc: 234881024 data_used: 17436672
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbfc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:02.581413+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x562226bbfc00 session 0x56222736a5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4ef52e000/0x0/0x4ffc00000, data 0xc02fc42/0xc140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x56222585fc00 session 0x562226c01860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 31162368 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:03.581589+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 31170560 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:04.581758+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4ef52e000/0x0/0x4ffc00000, data 0xc02fc42/0xc140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 31170560 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ac000 session 0x56222736af00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:05.582014+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 31170560 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:06.582162+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.852004051s of 10.180535316s, submitted: 15
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 22659072 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2691795 data_alloc: 234881024 data_used: 17436672
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad000 session 0x562226ce10e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:07.582322+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad400 session 0x5622274a8d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 30564352 heap: 144064512 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4edd2b000/0x0/0x4ffc00000, data 0xd82fc7b/0xd942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,2,3,2])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x562226bbe400 session 0x5622273e0960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x56222585fc00 session 0x5622274a9c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:08.582578+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 30162944 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.582756+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ac000 session 0x5622268bf2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 38346752 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad000 session 0x5622274a85a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.582973+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 38273024 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.583213+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 38199296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4ea52e000/0x0/0x4ffc00000, data 0x1102fc42/0x11140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134987 data_alloc: 234881024 data_used: 17444864
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.583490+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 38199296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 heartbeat osd_stat(store_statfs(0x4ea52e000/0x0/0x4ffc00000, data 0x1102fc42/0x11140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.583662+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 122781696 unmapped: 29679616 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.583806+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115441664 unmapped: 37019648 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.584051+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622262ad400 session 0x5622271094a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 37986304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b2800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.584208+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 ms_handle_reset con 0x5622278b2800 session 0x56222763fa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.825181961s of 10.100408554s, submitted: 82
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 193 ms_handle_reset con 0x5622262ad000 session 0x5622273e14a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114802688 unmapped: 37658624 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552918 data_alloc: 234881024 data_used: 17457152
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.584351+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 ms_handle_reset con 0x5622262ac000 session 0x56222463c960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 ms_handle_reset con 0x5622262ad400 session 0x5622257601e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 ms_handle_reset con 0x56222585fc00 session 0x56222535af00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 heartbeat osd_stat(store_statfs(0x4e6d23000/0x0/0x4ffc00000, data 0x14833672/0x1494a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 37830656 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.584604+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 37822464 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 ms_handle_reset con 0x5622278b3800 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.584835+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 ms_handle_reset con 0x56222585fc00 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 ms_handle_reset con 0x562224cdac00 session 0x562224ae3c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 ms_handle_reset con 0x5622278b3400 session 0x562226c73680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 37543936 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.585201+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 ms_handle_reset con 0x5622262ac000 session 0x5622268972c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 37527552 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.585427+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 ms_handle_reset con 0x5622262ad400 session 0x5622271392c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 39534592 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x5622251d4c00 session 0x56222463c1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x562225369c00 session 0x56222461f680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x5622262ad000 session 0x562226cb45a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610308 data_alloc: 234881024 data_used: 17477632
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.585630+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x562224cdac00 session 0x5622262cad20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x56222585fc00 session 0x56222736ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 heartbeat osd_stat(store_statfs(0x4fa7b3000/0x0/0x4ffc00000, data 0xda709c/0xeba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.585817+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 ms_handle_reset con 0x56222585fc00 session 0x56222736be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.585968+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.586128+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.586324+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319939 data_alloc: 218103808 data_used: 5152768
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0xda708c/0xeb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.586507+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 46194688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.043321609s of 11.649839401s, submitted: 204
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.586863+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 197 ms_handle_reset con 0x562224cdac00 session 0x56222736a5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622251d4c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 197 ms_handle_reset con 0x5622251d4c00 session 0x562226c003c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 46145536 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.587186+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 46145536 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 197 ms_handle_reset con 0x562225369c00 session 0x562226dd2780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.587347+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 46145536 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.587494+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 197 heartbeat osd_stat(store_statfs(0x4fa3a2000/0x0/0x4ffc00000, data 0xda8c8a/0xebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106340352 unmapped: 46120960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326215 data_alloc: 218103808 data_used: 5156864
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.587785+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106340352 unmapped: 46120960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x5622262ac000 session 0x562226c6a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.587940+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x5622262ad000 session 0x562226dd2000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x562224cdac00 session 0x562226ce0780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105988096 unmapped: 46473216 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622251d4c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x5622251d4c00 session 0x562224b90960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x562225369c00 session 0x56222535b4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.588139+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 ms_handle_reset con 0x56222585fc00 session 0x562224641c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 heartbeat osd_stat(store_statfs(0x4fa3a0000/0x0/0x4ffc00000, data 0xdaa7d0/0xebe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105988096 unmapped: 46473216 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.588384+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x56222585fc00 session 0x5622273e0f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 46465024 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.588649+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622251d4c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622251d4c00 session 0x56222463c5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x562224cdac00 session 0x5622268963c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 46432256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338329 data_alloc: 218103808 data_used: 5177344
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.588986+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106029056 unmapped: 46432256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x562225369c00 session 0x562226ce14a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.615046501s of 10.002351761s, submitted: 113
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622262ad000 session 0x562224ba8f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.589169+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa399000/0x0/0x4ffc00000, data 0xdac35e/0xec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622262ad000 session 0x562226cdfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x562224cdac00 session 0x562226dd3a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa18b000/0x0/0x4ffc00000, data 0xfbc2fc/0x10d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 46759936 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622262ac000 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.589314+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622278b3400 session 0x56222744a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622273cb000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x5622273cb000 session 0x5622268be780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105242624 unmapped: 47218688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.589583+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105242624 unmapped: 47218688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 heartbeat osd_stat(store_statfs(0x4f9a1d000/0x0/0x4ffc00000, data 0x172b29a/0x1841000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.589826+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105242624 unmapped: 47218688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415000 data_alloc: 218103808 data_used: 5177344
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.590042+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 ms_handle_reset con 0x562224cdac00 session 0x5622268bfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105234432 unmapped: 47226880 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.590315+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105234432 unmapped: 47226880 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 heartbeat osd_stat(store_statfs(0x4f9a1c000/0x0/0x4ffc00000, data 0x172b2fc/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.590571+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 heartbeat osd_stat(store_statfs(0x4f9a1c000/0x0/0x4ffc00000, data 0x172b2fc/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 47300608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 200 heartbeat osd_stat(store_statfs(0x4f9a1c000/0x0/0x4ffc00000, data 0x172b2fc/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.590735+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 105168896 unmapped: 47292416 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.590899+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 45056000 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419506 data_alloc: 218103808 data_used: 5185536
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:47.591062+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 200 ms_handle_reset con 0x5622262ad000 session 0x5622274a8d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106905600 unmapped: 45555712 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.591296+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.158949852s of 10.186372757s, submitted: 92
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106954752 unmapped: 45506560 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 200 ms_handle_reset con 0x5622262ac000 session 0x56222468a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.591454+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106954752 unmapped: 45506560 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.591604+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 200 heartbeat osd_stat(store_statfs(0x4f9a13000/0x0/0x4ffc00000, data 0x1732ed0/0x184b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106954752 unmapped: 45506560 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228d7ec00 session 0x56222763e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.591783+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228d7e800 session 0x562226cdf680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x17350e5/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 45498368 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426667 data_alloc: 218103808 data_used: 5197824
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.591994+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 45498368 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.592161+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228d7e400 session 0x56222534da40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x17350e5/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x17350e5/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.592340+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228d7ec00 session 0x56222534c780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.592493+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.592618+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425435 data_alloc: 218103808 data_used: 5197824
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.592836+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.593010+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.979375839s of 10.667660713s, submitted: 15
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562224cdac00 session 0x562224b91680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.593134+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 heartbeat osd_stat(store_statfs(0x4f9a0f000/0x0/0x4ffc00000, data 0x17350e5/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x5622262ad000 session 0x56222468a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ac000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x5622262ac000 session 0x56222468b4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562224cdac00 session 0x5622268be780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.593320+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.593529+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x5622262ad000 session 0x5622268bfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438329 data_alloc: 218103808 data_used: 5193728
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.593670+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 46276608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.593826+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228d7ec00 session 0x56222744a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228a97400 session 0x562226dd3e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 106479616 unmapped: 45981696 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 ms_handle_reset con 0x562228a97800 session 0x5622273e0b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.593994+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228a97c00 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228a97000 session 0x562226dd3a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 heartbeat osd_stat(store_statfs(0x4f8f43000/0x0/0x4ffc00000, data 0x21fd1b9/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562224cdac00 session 0x5622253dd680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228d7e400 session 0x562225339e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x5622262ad000 session 0x5622274a8960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228a96c00 session 0x5622274a8d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 44195840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.594194+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562224cdac00 session 0x562226ce0b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 44195840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.594412+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228a97000 session 0x562226ce0780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 ms_handle_reset con 0x562228a97c00 session 0x562225338960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x5622262ad000 session 0x562227139c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 44179456 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x562224cdac00 session 0x5622268963c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.594666+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450412 data_alloc: 218103808 data_used: 5222400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x562228a96c00 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x562228a97c00 session 0x562225339a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x562228d7e400 session 0x562225338d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 ms_handle_reset con 0x562228a97000 session 0x5622268bfc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 44138496 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.594903+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 204 ms_handle_reset con 0x562224cdac00 session 0x562224ae3680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.600512981s of 10.020191193s, submitted: 196
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 204 ms_handle_reset con 0x562228a96c00 session 0x5622274983c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 44105728 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.595162+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 204 heartbeat osd_stat(store_statfs(0x4f9a06000/0x0/0x4ffc00000, data 0x1738943/0x1856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 44105728 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.595354+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108355584 unmapped: 44105728 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 204 heartbeat osd_stat(store_statfs(0x4f9a0b000/0x0/0x4ffc00000, data 0x1733ee4/0x1851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.595601+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 44072960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.595747+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414725 data_alloc: 218103808 data_used: 5218304
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 ms_handle_reset con 0x562228a97c00 session 0x562227498d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 44072960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.595937+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 ms_handle_reset con 0x562228d7e400 session 0x562227499680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 44072960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 ms_handle_reset con 0x562228a97400 session 0x562227498780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.596127+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 ms_handle_reset con 0x562224cdac00 session 0x5622274981e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 heartbeat osd_stat(store_statfs(0x4f9f99000/0x0/0x4ffc00000, data 0x11a5b9a/0x12c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 44072960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.596314+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 205 heartbeat osd_stat(store_statfs(0x4f9f99000/0x0/0x4ffc00000, data 0x11a5b9a/0x12c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 44072960 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 206 ms_handle_reset con 0x562228a96c00 session 0x5622262e8f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.596451+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 206 ms_handle_reset con 0x562228a97c00 session 0x5622262e9c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 207 ms_handle_reset con 0x562228a97800 session 0x56222763e960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 44032000 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.596691+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1427363 data_alloc: 218103808 data_used: 5234688
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 44032000 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.596862+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 44032000 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 208 ms_handle_reset con 0x562228d7ec00 session 0x5622271083c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.597011+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 208 heartbeat osd_stat(store_statfs(0x4f9f8b000/0x0/0x4ffc00000, data 0x11aaf4c/0x12d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.726862907s of 10.330990791s, submitted: 65
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 208 ms_handle_reset con 0x562228d7e400 session 0x5622262e8960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 43999232 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.597181+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a96c00 session 0x5622253dc960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562224cdac00 session 0x562227108960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a97800 session 0x562225339e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 43327488 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a97c00 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.597337+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a97c00 session 0x562226ce0b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 heartbeat osd_stat(store_statfs(0x4f93a5000/0x0/0x4ffc00000, data 0x1d8ea88/0x1eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 43327488 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.597540+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1534707 data_alloc: 218103808 data_used: 5246976
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 heartbeat osd_stat(store_statfs(0x4f93a5000/0x0/0x4ffc00000, data 0x1d8ea88/0x1eb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 43327488 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a96c00 session 0x562227108b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.597708+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228d7e400 session 0x56222744a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228d7ec00 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 42475520 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 ms_handle_reset con 0x562228a96800 session 0x5622262cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.597846+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 42475520 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.598081+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 42475520 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.598431+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 42475520 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.598625+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1531227 data_alloc: 218103808 data_used: 5251072
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 heartbeat osd_stat(store_statfs(0x4f93a7000/0x0/0x4ffc00000, data 0x1d8ea78/0x1eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 42541056 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.598846+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109920256 unmapped: 42541056 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.599036+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 ms_handle_reset con 0x562228a97800 session 0x5622262e9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.047908783s of 10.420989037s, submitted: 80
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 42532864 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.599194+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 ms_handle_reset con 0x562224cdac00 session 0x5622274a8960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 42532864 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.599390+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 heartbeat osd_stat(store_statfs(0x4f93a3000/0x0/0x4ffc00000, data 0x122e6ae/0x1359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 42532864 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.599603+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450620 data_alloc: 218103808 data_used: 5259264
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 ms_handle_reset con 0x562228a96c00 session 0x562226cde1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 42532864 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.599814+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109928448 unmapped: 42532864 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.599990+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 heartbeat osd_stat(store_statfs(0x4f9f85000/0x0/0x4ffc00000, data 0x11ae6ae/0x12d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 42516480 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.600152+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 ms_handle_reset con 0x562228d7e400 session 0x562226dd34a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109944832 unmapped: 42516480 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.600403+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 42500096 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.600566+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454356 data_alloc: 218103808 data_used: 5275648
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 211 ms_handle_reset con 0x562228a97c00 session 0x562226c73860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 42475520 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.600825+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 212 ms_handle_reset con 0x562224cdac00 session 0x56222762e000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 212 ms_handle_reset con 0x562228a97c00 session 0x56222468b2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 42467328 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.601062+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.603280544s of 10.386857033s, submitted: 75
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 42467328 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.601241+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 213 ms_handle_reset con 0x562228a96c00 session 0x5622274ca780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 213 ms_handle_reset con 0x562228a97800 session 0x56222539eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x11b39d4/0x12de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 214 ms_handle_reset con 0x562228d7e400 session 0x5622253dd4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 214 ms_handle_reset con 0x562228d7ec00 session 0x5622268bfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 109993984 unmapped: 42467328 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.601388+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 214 ms_handle_reset con 0x562224cdac00 session 0x5622253383c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 215 ms_handle_reset con 0x562228d7e400 session 0x56222742be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 42459136 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.601560+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 215 ms_handle_reset con 0x562228a97c00 session 0x562224ae25a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464890 data_alloc: 218103808 data_used: 5287936
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 215 ms_handle_reset con 0x562228a96c00 session 0x56222534cb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 215 ms_handle_reset con 0x562228a97800 session 0x562226c001e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 42459136 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.601740+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 215 handle_osd_map epochs [216,217], i have 215, src has [1,217]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 42401792 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.601873+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 217 ms_handle_reset con 0x562228a97c00 session 0x5622268972c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 42401792 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.602004+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 218 ms_handle_reset con 0x562224cdac00 session 0x56222736ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 42385408 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.602229+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 218 heartbeat osd_stat(store_statfs(0x4f9f71000/0x0/0x4ffc00000, data 0x11bc54a/0x12ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 42385408 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.602467+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 218 heartbeat osd_stat(store_statfs(0x4f9f71000/0x0/0x4ffc00000, data 0x11bc54a/0x12ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474324 data_alloc: 218103808 data_used: 5292032
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110092288 unmapped: 42369024 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 ms_handle_reset con 0x562228d7e400 session 0x56222763fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.602719+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 ms_handle_reset con 0x562228d7ec00 session 0x5622274cb2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 42295296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.602879+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 42295296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:50.603057+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 42295296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.603199+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280409813s of 11.768027306s, submitted: 162
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f9f6d000/0x0/0x4ffc00000, data 0x11be22e/0x12f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 ms_handle_reset con 0x562224cdac00 session 0x56222461ef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 42295296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.603362+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482438 data_alloc: 218103808 data_used: 5292032
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 ms_handle_reset con 0x562228a97800 session 0x562226dd30e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 219 handle_osd_map epochs [219,220], i have 219, src has [1,220]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110174208 unmapped: 42287104 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.603487+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a97c00 session 0x56222539fa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 42278912 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.603634+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228d7e400 session 0x56222534d680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.603814+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 42262528 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0x11bfd04/0x12f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a96000 session 0x56222744a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a96000 session 0x56222763ef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562224cdac00 session 0x562224b91c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a97800 session 0x5622274a9a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.604159+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 42262528 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a97c00 session 0x56222762f680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228d7e400 session 0x56222736b4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228d7e400 session 0x56222461e3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562224cdac00 session 0x562226cb4000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 ms_handle_reset con 0x562228a96000 session 0x56222763e960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.604330+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 42188800 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1499421 data_alloc: 218103808 data_used: 5304320
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 heartbeat osd_stat(store_statfs(0x4f9ee1000/0x0/0x4ffc00000, data 0x1248d14/0x137d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.604548+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 42188800 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.604670+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 42188800 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x56222742be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228a97c00 session 0x5622274a9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228a97800 session 0x56222744a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562224cdac00 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.604908+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 42139648 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228d7e400 session 0x562226cb45a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228a96000 session 0x562224b91c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x562224ae25a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x5622253383c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.605039+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edb000/0x0/0x4ffc00000, data 0x124a9f4/0x1383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 42065920 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562224cdac00 session 0x562226c6a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edb000/0x0/0x4ffc00000, data 0x124a9f4/0x1383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.313113213s of 10.157511711s, submitted: 101
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.605203+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9eda000/0x0/0x4ffc00000, data 0x124aa17/0x1384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 42065920 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513326 data_alloc: 218103808 data_used: 5320704
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228d7e400 session 0x56222742a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.605322+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 41951232 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6400 session 0x562226caed20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.605472+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110518272 unmapped: 41943040 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6800 session 0x5622271392c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562224cdac00 session 0x562227139860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edc000/0x0/0x4ffc00000, data 0x124a9a5/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.605599+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228d7e400 session 0x5622271390e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x562227139a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.605750+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.605885+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516080 data_alloc: 218103808 data_used: 5877760
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.606181+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.606336+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.606596+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edc000/0x0/0x4ffc00000, data 0x124a9a5/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6400 session 0x5622262e9c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.606744+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 41893888 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edc000/0x0/0x4ffc00000, data 0x124a9a5/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6800 session 0x562226cae780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edc000/0x0/0x4ffc00000, data 0x124a9a5/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.606920+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 41910272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1515904 data_alloc: 218103808 data_used: 5877760
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.138450623s of 11.012658119s, submitted: 42
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562224cdac00 session 0x5622274cb2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.607177+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 41869312 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.607435+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 41861120 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9edb000/0x0/0x4ffc00000, data 0x124a9b5/0x1383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.607593+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 39510016 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228d7e400 session 0x562226cb4780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x5622273e0b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.607734+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 39510016 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.607866+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 36978688 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1609774 data_alloc: 218103808 data_used: 6049792
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.608074+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 39133184 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9500000/0x0/0x4ffc00000, data 0x1c1d9b5/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6400 session 0x56222744ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.608261+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 39075840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6c00 session 0x562227108f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.608417+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 39075840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.608593+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 39075840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562224cdac00 session 0x56222534c780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f94ec000/0x0/0x4ffc00000, data 0x1c339b5/0x1d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.608724+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 39075840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1604474 data_alloc: 218103808 data_used: 6049792
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.608849+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 39075840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.388900757s of 10.777103424s, submitted: 69
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x562228d7e400 session 0x562224b91680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.608957+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 39583744 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6000 session 0x5622268f52c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f94f2000/0x0/0x4ffc00000, data 0x1c339b5/0x1d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.609205+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 39583744 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6400 session 0x5622274cb0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.609426+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.609630+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557386 data_alloc: 218103808 data_used: 6045696
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.609844+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9b03000/0x0/0x4ffc00000, data 0x16239a5/0x175b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.610009+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.610155+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.610278+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 39559168 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 ms_handle_reset con 0x5622293a6c00 session 0x5622274ca5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.610444+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 39550976 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1560774 data_alloc: 218103808 data_used: 6045696
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.610563+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 32890880 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.008467674s of 10.001828194s, submitted: 36
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.610841+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 39149568 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 222 heartbeat osd_stat(store_statfs(0x4f9297000/0x0/0x4ffc00000, data 0x1e8c5db/0x1fc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.611080+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 39141376 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 222 ms_handle_reset con 0x562228d7e400 session 0x56222742bc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 222 ms_handle_reset con 0x562224cdac00 session 0x562224ba8780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.611169+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 39149568 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.611325+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 39149568 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635889 data_alloc: 218103808 data_used: 6057984
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 223 ms_handle_reset con 0x5622293a6000 session 0x562226caef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.611548+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 39141376 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.611675+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 223 ms_handle_reset con 0x5622293a7400 session 0x56222763f2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 39149568 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 224 heartbeat osd_stat(store_statfs(0x4f9295000/0x0/0x4ffc00000, data 0x1e8e1af/0x1fc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.611832+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 39149568 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 224 ms_handle_reset con 0x5622293a6400 session 0x562224b45c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.611972+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114884608 unmapped: 37576704 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.612144+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 36069376 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1700024 data_alloc: 234881024 data_used: 13549568
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 224 ms_handle_reset con 0x562228d7e400 session 0x5622273e1a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 224 ms_handle_reset con 0x5622293a6000 session 0x5622253dd4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.612260+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 36069376 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.905950546s of 10.075222969s, submitted: 49
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 225 ms_handle_reset con 0x5622293a7400 session 0x56222534c000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.612377+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 225 ms_handle_reset con 0x5622278b3400 session 0x5622274cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 35979264 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 225 ms_handle_reset con 0x562228e1e000 session 0x562226c73680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.612497+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 35921920 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 225 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f9288000/0x0/0x4ffc00000, data 0x1e937f9/0x1fd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 ms_handle_reset con 0x5622278b3400 session 0x56222742b0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.612634+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116563968 unmapped: 35897344 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 ms_handle_reset con 0x562228a97c00 session 0x562224ae3e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f9288000/0x0/0x4ffc00000, data 0x1e937f9/0x1fd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 ms_handle_reset con 0x562228d7e400 session 0x56222763e3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 ms_handle_reset con 0x562224cdac00 session 0x5622253dde00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.612780+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 35848192 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1710203 data_alloc: 234881024 data_used: 13574144
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 227 ms_handle_reset con 0x562228e1e000 session 0x56222742ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.612996+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 36601856 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 227 ms_handle_reset con 0x5622278b3400 session 0x56222736ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.613340+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 heartbeat osd_stat(store_statfs(0x4f9285000/0x0/0x4ffc00000, data 0x1e95421/0x1fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 ms_handle_reset con 0x562228a97c00 session 0x56222468a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 36593664 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 ms_handle_reset con 0x562228e1e000 session 0x562226cb50e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 heartbeat osd_stat(store_statfs(0x4f9285000/0x0/0x4ffc00000, data 0x1e95421/0x1fd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 228 ms_handle_reset con 0x5622293a6000 session 0x562224b903c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.613634+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 229 ms_handle_reset con 0x562228d7e400 session 0x562226ce03c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 36560896 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 229 ms_handle_reset con 0x562224cdac00 session 0x5622271383c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 229 heartbeat osd_stat(store_statfs(0x4f927d000/0x0/0x4ffc00000, data 0x1e98c01/0x1fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.613822+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 36560896 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.614482+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 115900416 unmapped: 36560896 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 230 ms_handle_reset con 0x562228d7e400 session 0x562227138f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1724508 data_alloc: 234881024 data_used: 13631488
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.614696+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 28983296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.008834839s of 10.020521164s, submitted: 234
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.614852+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 231 ms_handle_reset con 0x562228a97c00 session 0x5622253dcb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 33177600 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 232 ms_handle_reset con 0x5622293a6000 session 0x562226ce1a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.614981+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 31948800 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 ms_handle_reset con 0x562228e1e000 session 0x562226cafa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 ms_handle_reset con 0x562224cdac00 session 0x562226c01860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 ms_handle_reset con 0x5622278b3400 session 0x5622262cbe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.615495+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f876b000/0x0/0x4ffc00000, data 0x299509e/0x2ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 33251328 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.615643+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 33218560 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824917 data_alloc: 234881024 data_used: 13889536
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 234 ms_handle_reset con 0x562228a97c00 session 0x5622274cb4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 234 ms_handle_reset con 0x562228d7e400 session 0x562226ce14a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.615860+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 33202176 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.615990+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 33202176 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.616241+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 33177600 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 heartbeat osd_stat(store_statfs(0x4f8774000/0x0/0x4ffc00000, data 0x2999f4b/0x2ae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.616378+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119283712 unmapped: 33177600 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.616621+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118095872 unmapped: 34365440 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 ms_handle_reset con 0x5622293a6000 session 0x56222744b2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1832358 data_alloc: 234881024 data_used: 13897728
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.616739+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 34332672 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.616863+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 ms_handle_reset con 0x5622293a7000 session 0x562224b90960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 34332672 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.616969+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.563754082s of 11.562155724s, submitted: 160
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 ms_handle_reset con 0x562228a96000 session 0x562226c6a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 ms_handle_reset con 0x562228a97800 session 0x562224b44f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 34332672 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 heartbeat osd_stat(store_statfs(0x4f8752000/0x0/0x4ffc00000, data 0x29beed9/0x2b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 ms_handle_reset con 0x562224cdac00 session 0x562226dd2780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.617249+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 35078144 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.617384+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 35078144 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1786278 data_alloc: 234881024 data_used: 13176832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.617535+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 35078144 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.617670+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 35078144 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.617924+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 237 heartbeat osd_stat(store_statfs(0x4f87a5000/0x0/0x4ffc00000, data 0x255cea6/0x26a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 35045376 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622278b3400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 237 ms_handle_reset con 0x5622278b3400 session 0x56222539f2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.618052+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 35037184 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f879f000/0x0/0x4ffc00000, data 0x25604f4/0x26ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.618293+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 35028992 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1794482 data_alloc: 234881024 data_used: 13189120
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.618429+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 35012608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 239 ms_handle_reset con 0x562224cdac00 session 0x562226cae1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.618558+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x562228a96000 session 0x56222534da40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 35012608 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.618702+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.382049561s of 10.055097580s, submitted: 159
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x562228a97800 session 0x56222763f680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 33120256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 heartbeat osd_stat(store_statfs(0x4f8798000/0x0/0x4ffc00000, data 0x2566cf2/0x26b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x5622293a7000 session 0x5622262e9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x562228a97c00 session 0x56222539fa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x562224cdac00 session 0x562227138780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 ms_handle_reset con 0x562228a96000 session 0x562226cdfc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.618873+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 33087488 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.619074+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 33087488 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1881632 data_alloc: 234881024 data_used: 13189120
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.619275+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 33071104 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.619428+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 33071104 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.619609+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 33062912 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 heartbeat osd_stat(store_statfs(0x4f7dcd000/0x0/0x4ffc00000, data 0x2f3082a/0x3080000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x562228a97800 session 0x56222744ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.619763+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x562227658400 session 0x56222736af00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 33062912 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x562228d7e400 session 0x5622257601e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x5622293a7000 session 0x562224ba9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.619948+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114843648 unmapped: 37617664 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1679126 data_alloc: 218103808 data_used: 5427200
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x562224cdac00 session 0x562226ce1e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.620173+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x562227658400 session 0x56222763f0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114581504 unmapped: 37879808 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.620293+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 114581504 unmapped: 37879808 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.620430+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 33898496 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.620537+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.473547935s of 10.849651337s, submitted: 59
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x5622293a6000 session 0x5622253dc5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 33898496 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 heartbeat osd_stat(store_statfs(0x4f912c000/0x0/0x4ffc00000, data 0x1bd085d/0x1d22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.620764+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 33898496 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1758926 data_alloc: 234881024 data_used: 15400960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.621002+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 33898496 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.621146+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 ms_handle_reset con 0x5622293a7400 session 0x5622253ddc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 33890304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.621341+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 33890304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.621501+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 33890304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.621653+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 33890304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9127000/0x0/0x4ffc00000, data 0x1bd2327/0x1d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1766354 data_alloc: 234881024 data_used: 15409152
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.621787+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 33890304 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562224cdac00 session 0x562227498780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562227658400 session 0x56222463c960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a6000 session 0x56222463d860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a7000 session 0x56222468a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.621928+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562228e1e400 session 0x56222468a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562224cdac00 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562227658400 session 0x56222763e3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f8a56000/0x0/0x4ffc00000, data 0x22a3337/0x23f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a6000 session 0x5622246405a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a7000 session 0x562226dd2f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 33619968 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.622076+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 33619968 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.622259+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1e800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562228e1e800 session 0x5622262cb4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.591345787s of 10.166965485s, submitted: 48
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 30220288 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.622369+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 124649472 unmapped: 27811840 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1885854 data_alloc: 234881024 data_used: 17223680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.622528+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 23764992 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.622657+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562224cdac00 session 0x56222534c780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562227658400 session 0x5622274cab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 25526272 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a6000 session 0x5622268f5e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.622789+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x5622293a7000 session 0x562226c72960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9438000/0x0/0x4ffc00000, data 0x220f389/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 29024256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.622955+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 29024256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.623177+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562228e1ec00 session 0x562227499680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9437000/0x0/0x4ffc00000, data 0x220f399/0x2365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 29024256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1837768 data_alloc: 234881024 data_used: 12128256
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.623318+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 29024256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.623480+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 29024256 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.623693+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 ms_handle_reset con 0x562224cdac00 session 0x562226cdfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9436000/0x0/0x4ffc00000, data 0x220f3b7/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,1,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 29294592 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9436000/0x0/0x4ffc00000, data 0x220f3b7/0x2366000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.623863+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9ae6000/0x0/0x4ffc00000, data 0x22313bc/0x2388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 29294592 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.624063+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.331140518s of 10.946500778s, submitted: 155
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 29294592 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 243 ms_handle_reset con 0x562227658400 session 0x562226c00780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1838243 data_alloc: 234881024 data_used: 12144640
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.624591+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123199488 unmapped: 29261824 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.624782+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 123199488 unmapped: 29261824 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 243 heartbeat osd_stat(store_statfs(0x4f9ae2000/0x0/0x4ffc00000, data 0x2232f90/0x238b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 243 ms_handle_reset con 0x5622293a7000 session 0x5622253dc000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.625033+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 27983872 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 heartbeat osd_stat(store_statfs(0x4f98e1000/0x0/0x4ffc00000, data 0x2434190/0x258d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.625236+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 28049408 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.625414+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 ms_handle_reset con 0x562228e1ec00 session 0x5622271094a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 28049408 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 ms_handle_reset con 0x562228e1f000 session 0x56222539eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1857109 data_alloc: 234881024 data_used: 12148736
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.625597+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 heartbeat osd_stat(store_statfs(0x4f98ca000/0x0/0x4ffc00000, data 0x2448db8/0x25a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 ms_handle_reset con 0x5622293a6000 session 0x562226ce05a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 28049408 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 ms_handle_reset con 0x562227658400 session 0x562226cb5860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 heartbeat osd_stat(store_statfs(0x4f98ca000/0x0/0x4ffc00000, data 0x2448db8/0x25a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Nov 29 08:25:35 compute-0 podman[309044]: 2025-11-29 08:25:35.926366563 +0000 UTC m=+0.079845013 container health_status af1b77a47d1367d30d41142c2cc5326c2e498ccae918bacbb221ab5b4446c1c7 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 245 ms_handle_reset con 0x562224cdac00 session 0x562226cdf4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 245 ms_handle_reset con 0x562228e1ec00 session 0x562224b91e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.626201+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 125501440 unmapped: 26959872 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.626845+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 245 ms_handle_reset con 0x5622293a7000 session 0x562226caf860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 245 ms_handle_reset con 0x562224cdac00 session 0x56222744b0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 125526016 unmapped: 26935296 heap: 152461312 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 ms_handle_reset con 0x562227658400 session 0x5622274a9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.627120+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 ms_handle_reset con 0x562228e1ec00 session 0x5622262cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151756800 unmapped: 12460032 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.628476+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.005187511s of 10.063023567s, submitted: 155
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 ms_handle_reset con 0x562228e1f400 session 0x5622273e1860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140689408 unmapped: 23527424 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 ms_handle_reset con 0x5622293a6000 session 0x56222763f0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1993237 data_alloc: 234881024 data_used: 22597632
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 heartbeat osd_stat(store_statfs(0x4f8a0c000/0x0/0x4ffc00000, data 0x3304851/0x345f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.628906+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140722176 unmapped: 23494656 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.629517+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140779520 unmapped: 23437312 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.629817+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140779520 unmapped: 23437312 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.630028+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140779520 unmapped: 23437312 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f89fd000/0x0/0x4ffc00000, data 0x331546b/0x3470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.630168+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 ms_handle_reset con 0x562224cdac00 session 0x562226c721e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 26869760 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 ms_handle_reset con 0x562227658400 session 0x562226cae780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1986611 data_alloc: 234881024 data_used: 22609920
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f89fe000/0x0/0x4ffc00000, data 0x331546b/0x3470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.630284+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137363456 unmapped: 26853376 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.630585+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137363456 unmapped: 26853376 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.631000+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562228e1f400 session 0x562224ae25a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562228e1ec00 session 0x562224b45c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26828800 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 heartbeat osd_stat(store_statfs(0x4f89fa000/0x0/0x4ffc00000, data 0x3317077/0x3473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.631177+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562228e1f800 session 0x562225339e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562224cdac00 session 0x5622253dcb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562227658400 session 0x562226cb5680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26828800 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.631556+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26828800 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1991020 data_alloc: 234881024 data_used: 22614016
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.631766+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 ms_handle_reset con 0x562228e1ec00 session 0x5622246410e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 26828800 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.631920+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 heartbeat osd_stat(store_statfs(0x4f89fa000/0x0/0x4ffc00000, data 0x33170c9/0x3473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.643013000s of 12.080608368s, submitted: 61
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 26755072 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 248 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.632317+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 26722304 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 ms_handle_reset con 0x562228e1f400 session 0x56222534d0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 ms_handle_reset con 0x562228e1fc00 session 0x5622271385a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.632533+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 26722304 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.632713+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 ms_handle_reset con 0x562227658400 session 0x5622274cab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 26664960 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2001677 data_alloc: 234881024 data_used: 22618112
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.632910+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 26664960 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.633185+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 heartbeat osd_stat(store_statfs(0x4f89f4000/0x0/0x4ffc00000, data 0x3318cdf/0x347a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 249 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x56222a84c000 session 0x5622274990e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x562228e1ec00 session 0x5622274ca5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x56222a84c400 session 0x5622246405a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x562224cdac00 session 0x562226cdf2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x562224cdac00 session 0x562226cde000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x562227658400 session 0x5622274a8b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137584640 unmapped: 26632192 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.633353+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 ms_handle_reset con 0x562228e1ec00 session 0x5622274a9c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 251 ms_handle_reset con 0x562228e1f400 session 0x562223f5a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137609216 unmapped: 26607616 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 251 heartbeat osd_stat(store_statfs(0x4f89f0000/0x0/0x4ffc00000, data 0x331a955/0x347e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.633545+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 251 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x56222a84c000 session 0x56222534c5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 26574848 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x562227658400 session 0x562226dd2f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x562224cdac00 session 0x56222463c960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x562228e1ec00 session 0x5622274992c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.633736+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1f400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x56222a84c400 session 0x562226ce05a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 heartbeat osd_stat(store_statfs(0x4f89e6000/0x0/0x4ffc00000, data 0x331e196/0x3486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 ms_handle_reset con 0x56222a84c800 session 0x56222763e3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 26468352 heap: 164216832 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2023666 data_alloc: 234881024 data_used: 22638592
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.633945+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146366464 unmapped: 76636160 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.639235+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 heartbeat osd_stat(store_statfs(0x4f45ea000/0x0/0x4ffc00000, data 0x771e0e4/0x7884000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.258938313s of 10.207116127s, submitted: 149
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150364160 unmapped: 72638464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.639378+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 heartbeat osd_stat(store_statfs(0x4f35ea000/0x0/0x4ffc00000, data 0x871e0e4/0x8884000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 142106624 unmapped: 80896000 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.639501+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 142106624 unmapped: 80896000 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.639701+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 80723968 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3147858 data_alloc: 251658240 data_used: 31174656
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.640087+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 142352384 unmapped: 80650240 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.640377+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 heartbeat osd_stat(store_statfs(0x4ec9ea000/0x0/0x4ffc00000, data 0xf31e0e4/0xf484000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146604032 unmapped: 76398592 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.640627+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 71909376 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.640796+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147193856 unmapped: 75808768 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.640986+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147316736 unmapped: 75685888 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4133088 data_alloc: 251658240 data_used: 31182848
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.641168+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 75481088 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.641381+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.074352264s of 10.025531769s, submitted: 83
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 heartbeat osd_stat(store_statfs(0x4e4de7000/0x0/0x4ffc00000, data 0x16f1fb9e/0x17087000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,1,1,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 66691072 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.641583+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x56222a84d000 session 0x562226dd23c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x56222a84c800 session 0x56222468a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x56222a84cc00 session 0x562227138d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x562224cdac00 session 0x56222468a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 79052800 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.641731+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x562227658400 session 0x562226c72b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 144121856 unmapped: 78880768 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.641969+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x562227658400 session 0x562226cae960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x562224cdac00 session 0x5622274ca3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 144515072 unmapped: 78487552 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2194883 data_alloc: 251658240 data_used: 32141312
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.642133+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 ms_handle_reset con 0x56222a84c800 session 0x56222539eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 144990208 unmapped: 78012416 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.642445+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 ms_handle_reset con 0x56222a84cc00 session 0x5622257610e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 heartbeat osd_stat(store_statfs(0x4f89e4000/0x0/0x4ffc00000, data 0x3324ada/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 77963264 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.642728+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145039360 unmapped: 77963264 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.642955+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 ms_handle_reset con 0x56222a84d000 session 0x5622268f54a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 77873152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 ms_handle_reset con 0x56222a84d000 session 0x562226dd3860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.643267+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 77873152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2199573 data_alloc: 251658240 data_used: 32825344
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.643630+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 ms_handle_reset con 0x562224cdac00 session 0x562226dd23c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 heartbeat osd_stat(store_statfs(0x4f89e1000/0x0/0x4ffc00000, data 0x3326702/0x348d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145768448 unmapped: 77234176 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.643857+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145768448 unmapped: 77234176 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.644083+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145768448 unmapped: 77234176 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.644421+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 77225984 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.613728523s of 12.372413635s, submitted: 206
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.644574+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145367040 unmapped: 77635584 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2208182 data_alloc: 251658240 data_used: 32829440
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.644745+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 heartbeat osd_stat(store_statfs(0x4f8972000/0x0/0x4ffc00000, data 0x3394764/0x34fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 ms_handle_reset con 0x562227658400 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145375232 unmapped: 77627392 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.645013+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 77725696 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84c800 session 0x562226ce0d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.645284+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 77725696 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.645468+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145276928 unmapped: 77725696 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.645637+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f82c5000/0x0/0x4ffc00000, data 0x3a401bc/0x3ba8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,2])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84cc00 session 0x56222463c960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84cc00 session 0x56222534cd20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 74186752 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2295700 data_alloc: 251658240 data_used: 32829440
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.645855+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x56222534d0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x56222534c5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 74178560 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.646022+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 74080256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.646467+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157401088 unmapped: 65601536 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.646696+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1ec00 session 0x5622274a9e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84c400 session 0x56222539f2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x562226caef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x5622274cb0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157466624 unmapped: 65536000 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.646975+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.206747055s of 10.039710999s, submitted: 81
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84c800 session 0x562223f5a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84cc00 session 0x562224b441e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84d400 session 0x562224b45c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147324928 unmapped: 75677696 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2485819 data_alloc: 251658240 data_used: 32841728
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.647181+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f75a8000/0x0/0x4ffc00000, data 0x4759278/0x48c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,12])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1ec00 session 0x5622274cb2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x5622262e8000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84c800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84d000 session 0x5622268acf00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x5622257601e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84c800 session 0x56222534cb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148013056 unmapped: 74989568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84cc00 session 0x56222742b860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.647318+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x5622253dd0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x56222742a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148013056 unmapped: 74989568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.647466+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1ec00 session 0x56222742b0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148013056 unmapped: 74989568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.647615+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84d800 session 0x56222461e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x562224ae3e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f59b8000/0x0/0x4ffc00000, data 0x51a92b1/0x5316000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148037632 unmapped: 74964992 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.647794+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x562226cafa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 73900032 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2481577 data_alloc: 251658240 data_used: 35782656
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.647946+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1ec00 session 0x562226cb43c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149110784 unmapped: 73891840 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.648127+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x56222a84d000 session 0x56222742ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f59b6000/0x0/0x4ffc00000, data 0x51a92e4/0x5318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149176320 unmapped: 73826304 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.648273+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 158351360 unmapped: 64651264 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.648427+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1f400 session 0x562226c01860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 57049088 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.648557+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.201305389s of 10.045639992s, submitted: 48
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165986304 unmapped: 57016320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.648720+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2637173 data_alloc: 268435456 data_used: 52994048
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166019072 unmapped: 56983552 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.648869+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f5c51000/0x0/0x4ffc00000, data 0x4f04282/0x5072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 56950784 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.649064+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 56950784 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.649338+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166068224 unmapped: 56934400 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.649500+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166068224 unmapped: 56934400 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.649720+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2607686 data_alloc: 268435456 data_used: 52744192
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166068224 unmapped: 56934400 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.649903+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166068224 unmapped: 56934400 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.650125+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f5c5c000/0x0/0x4ffc00000, data 0x4f04282/0x5072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166100992 unmapped: 56901632 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.650270+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166100992 unmapped: 56901632 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.650473+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.942747116s of 10.197336197s, submitted: 32
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 168812544 unmapped: 54190080 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.650648+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2628284 data_alloc: 268435456 data_used: 52760576
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.650802+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176783360 unmapped: 46219264 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.651005+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174161920 unmapped: 48840704 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.651189+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 51240960 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f5406000/0x0/0x4ffc00000, data 0x575a282/0x58c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.651361+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 51240960 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.651590+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171868160 unmapped: 51134464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2672272 data_alloc: 268435456 data_used: 52756480
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.651802+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173621248 unmapped: 49381376 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.651978+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 47562752 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.652195+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175439872 unmapped: 47562752 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.652424+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 48185344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f4ba8000/0x0/0x4ffc00000, data 0x5fb0282/0x611e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,3,7])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.652576+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177315840 unmapped: 45686784 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2755366 data_alloc: 268435456 data_used: 52748288
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.652760+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 45563904 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.652942+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 45563904 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.114833608s of 13.264824867s, submitted: 116
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.675352097s, txc = 0x562224a94300
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672725677s, txc = 0x5622258c8000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672294617s, txc = 0x56222492ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672754765s, txc = 0x562224af6000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.672529221s, txc = 0x562224ad6f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.671866417s, txc = 0x562225459200
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.671420097s, txc = 0x562224982f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.653139+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175947776 unmapped: 47054848 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562224cdac00 session 0x562224b45e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.653277+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176734208 unmapped: 46268416 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.653553+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176947200 unmapped: 46055424 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2781810 data_alloc: 268435456 data_used: 54685696
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f4671000/0x0/0x4ffc00000, data 0x64e725f/0x6654000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.653783+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176947200 unmapped: 46055424 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.653903+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175562752 unmapped: 47439872 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.654055+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562227658400 session 0x56222742bc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 46317568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.654202+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 46317568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.654326+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177094656 unmapped: 45907968 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2792528 data_alloc: 268435456 data_used: 55189504
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f464a000/0x0/0x4ffc00000, data 0x651725f/0x6684000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 ms_handle_reset con 0x562228e1ec00 session 0x56222763eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.654492+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177127424 unmapped: 45875200 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f463f000/0x0/0x4ffc00000, data 0x65231fd/0x668f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.654647+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177160192 unmapped: 45842432 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.565733910s of 10.001836777s, submitted: 103
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 ms_handle_reset con 0x56222a84d000 session 0x562226cb5680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.654791+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 172072960 unmapped: 50929664 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 ms_handle_reset con 0x56222b8b4000 session 0x56222763f0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.654947+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 172072960 unmapped: 50929664 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 ms_handle_reset con 0x562224cdac00 session 0x562226cb50e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 heartbeat osd_stat(store_statfs(0x4f54ff000/0x0/0x4ffc00000, data 0x5660e25/0x57ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.655179+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 50905088 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2624726 data_alloc: 268435456 data_used: 45064192
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 258 ms_handle_reset con 0x562228e1ec00 session 0x56222534cd20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 258 ms_handle_reset con 0x562228a96000 session 0x562227498000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 258 ms_handle_reset con 0x562228a97800 session 0x562226c6be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84d000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.655320+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 258 ms_handle_reset con 0x56222a84d000 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 60293120 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.655444+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 60293120 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.655692+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 ms_handle_reset con 0x562227658400 session 0x5622253381e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 60293120 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 heartbeat osd_stat(store_statfs(0x4f6146000/0x0/0x4ffc00000, data 0x460824e/0x4777000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 ms_handle_reset con 0x562224cdac00 session 0x562226ce0960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 ms_handle_reset con 0x562228a96000 session 0x56222736b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.655823+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 161275904 unmapped: 61726720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.656047+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 161275904 unmapped: 61726720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2399669 data_alloc: 251658240 data_used: 29691904
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 ms_handle_reset con 0x56222a84dc00 session 0x562226cdfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a97800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.656148+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 161275904 unmapped: 61726720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 260 ms_handle_reset con 0x562228a97800 session 0x5622268bfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 260 ms_handle_reset con 0x562224cdac00 session 0x56222736ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.656367+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150585344 unmapped: 72417280 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 260 heartbeat osd_stat(store_statfs(0x4f79ed000/0x0/0x4ffc00000, data 0x2a80bb5/0x2bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 260 ms_handle_reset con 0x562227658400 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 260 handle_osd_map epochs [261,262], i have 260, src has [1,262]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.975813866s of 10.280453682s, submitted: 149
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.656541+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150585344 unmapped: 72417280 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 262 ms_handle_reset con 0x562228a96000 session 0x562226cdef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.656689+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150585344 unmapped: 72417280 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228e1ec00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 263 ms_handle_reset con 0x56222a84dc00 session 0x562226c73860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 263 ms_handle_reset con 0x562228e1ec00 session 0x56222763f860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.656955+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150618112 unmapped: 72384512 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 263 ms_handle_reset con 0x562224cdac00 session 0x56222763e960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2133089 data_alloc: 234881024 data_used: 17473536
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.657144+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150568960 unmapped: 72433664 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.657346+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150568960 unmapped: 72433664 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 heartbeat osd_stat(store_statfs(0x4f7cc8000/0x0/0x4ffc00000, data 0x2a87aa2/0x2bf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.657567+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 ms_handle_reset con 0x56222b8b4400 session 0x5622262cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150568960 unmapped: 72433664 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 ms_handle_reset con 0x56222a84cc00 session 0x562224b44f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 ms_handle_reset con 0x562226e48c00 session 0x56222744a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227658400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 ms_handle_reset con 0x562227658400 session 0x5622262cb0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.657784+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139952128 unmapped: 83050496 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.658134+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139952128 unmapped: 83050496 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1919888 data_alloc: 218103808 data_used: 5582848
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.658272+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139952128 unmapped: 83050496 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.658457+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139952128 unmapped: 83050496 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.233240128s of 10.013711929s, submitted: 135
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.658584+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 265 ms_handle_reset con 0x562224cdac00 session 0x5622271385a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139960320 unmapped: 83042304 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 265 heartbeat osd_stat(store_statfs(0x4f8e98000/0x0/0x4ffc00000, data 0x18b853c/0x1a25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.658784+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139960320 unmapped: 83042304 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.659035+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 83017728 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 266 ms_handle_reset con 0x562226e48c00 session 0x5622273e0960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1873618 data_alloc: 218103808 data_used: 5586944
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.659203+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 83017728 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 266 ms_handle_reset con 0x56222a84cc00 session 0x562227139a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.659391+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 267 ms_handle_reset con 0x56222b8b4400 session 0x562226c730e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139051008 unmapped: 83951616 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.659553+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 ms_handle_reset con 0x562228a96000 session 0x5622262cad20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 heartbeat osd_stat(store_statfs(0x4f953d000/0x0/0x4ffc00000, data 0x1211d38/0x1381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.659813+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.660171+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1880206 data_alloc: 218103808 data_used: 5603328
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.660470+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 heartbeat osd_stat(store_statfs(0x4f9539000/0x0/0x4ffc00000, data 0x1213928/0x1384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.660668+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 heartbeat osd_stat(store_statfs(0x4f9539000/0x0/0x4ffc00000, data 0x1213928/0x1384000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.608169556s of 10.002414703s, submitted: 72
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.660946+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.661244+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.661478+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139911168 unmapped: 83091456 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883180 data_alloc: 218103808 data_used: 5603328
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562228a96000 session 0x5622271092c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562224cdac00 session 0x562227108780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562226e48c00 session 0x562227108960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222a84cc00 session 0x5622271094a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.661624+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222b8b4400 session 0x562227108b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 85278720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222b8b4400 session 0x5622271385a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562224cdac00 session 0x56222744a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f9536000/0x0/0x4ffc00000, data 0x12153e2/0x1387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562226e48c00 session 0x5622262cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562228a96000 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.661785+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 85344256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f8ec9000/0x0/0x4ffc00000, data 0x1882444/0x19f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.661990+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 85344256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.662202+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 85344256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f8ec9000/0x0/0x4ffc00000, data 0x1882444/0x19f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.662365+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 85344256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1939998 data_alloc: 218103808 data_used: 5603328
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222a84cc00 session 0x56222736b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.662540+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137658368 unmapped: 85344256 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.662803+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 85360640 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.662964+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 85082112 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.663232+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 85082112 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.645603180s of 11.834867477s, submitted: 57
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.663584+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 85082112 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1989782 data_alloc: 234881024 data_used: 12255232
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562226e48c00 session 0x562226c721e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f8ec8000/0x0/0x4ffc00000, data 0x1882454/0x19f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562228a96000 session 0x5622268f54a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.663747+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562224cdac00 session 0x5622253ddc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 85106688 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.663934+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222b8b4400 session 0x5622268bfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 85106688 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.664184+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 85106688 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.664385+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 85106688 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x56222a84dc00 session 0x56222744be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562224cdac00 session 0x56222463c960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.664531+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562226e48c00 session 0x5622273e1e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.9 total, 600.0 interval
                                           Cumulative writes: 15K writes, 60K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 5053 syncs, 3.13 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8950 writes, 33K keys, 8950 commit groups, 1.0 writes per commit group, ingest: 19.90 MB, 0.03 MB/s
                                           Interval WAL: 8950 writes, 3694 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 85000192 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2067488 data_alloc: 234881024 data_used: 12255232
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f8538000/0x0/0x4ffc00000, data 0x22114b6/0x2386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.664742+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 85000192 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.664879+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f8538000/0x0/0x4ffc00000, data 0x22114b6/0x2386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 ms_handle_reset con 0x562228a96000 session 0x56222763f0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 84983808 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.665039+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 270 heartbeat osd_stat(store_statfs(0x4f8537000/0x0/0x4ffc00000, data 0x2211518/0x2387000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 86761472 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 270 ms_handle_reset con 0x56222b8b4400 session 0x56222763fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.665284+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x56222b8b4000 session 0x56222744ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x56222a84dc00 session 0x562226c6be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 83484672 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.041840553s of 10.008715630s, submitted: 160
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562224cdac00 session 0x5622253381e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.665486+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562226e48c00 session 0x56222736b2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 77742080 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253699 data_alloc: 234881024 data_used: 13680640
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.665676+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562228a96000 session 0x562226ce12c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 77373440 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x56222b8b4800 session 0x56222468a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.665843+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562226d83c00 session 0x56222461fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562226e48c00 session 0x562226cb43c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 ms_handle_reset con 0x562226d83400 session 0x5622246405a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146128896 unmapped: 76873728 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.666032+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562224cdac00 session 0x562226cb4d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146137088 unmapped: 76865536 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f663a000/0x0/0x4ffc00000, data 0x4108d42/0x4284000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x56222b8b4400 session 0x562226dd2960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.666215+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.666388+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2340572 data_alloc: 234881024 data_used: 13688832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.666590+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f6636000/0x0/0x4ffc00000, data 0x410a916/0x4287000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.666938+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x5622293a7400 session 0x5622246401e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226e05c00 session 0x56222463c5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f6636000/0x0/0x4ffc00000, data 0x410a916/0x4287000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.667137+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562224cdac00 session 0x562226cb4780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f6634000/0x0/0x4ffc00000, data 0x410d916/0x428a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.667710+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.668218+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2334563 data_alloc: 234881024 data_used: 13692928
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226d83400 session 0x56222534d0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.844620705s of 11.188809395s, submitted: 89
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226d83c00 session 0x562227108d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.668364+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.668567+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.668753+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562224cdac00 session 0x56222742ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.669011+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226d83400 session 0x562224b90960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226e05c00 session 0x56222461f680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x5622293a7400 session 0x5622274caf00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146145280 unmapped: 76857344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f665a000/0x0/0x4ffc00000, data 0x40e98f6/0x4264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226e48c00 session 0x562227108d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.669178+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562224cdac00 session 0x56222534d0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 76709888 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2336599 data_alloc: 234881024 data_used: 13688832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226d83400 session 0x56222463c5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.669328+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 146300928 unmapped: 76701696 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.669500+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 heartbeat osd_stat(store_statfs(0x4f6635000/0x0/0x4ffc00000, data 0x410d906/0x4289000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147267584 unmapped: 75735040 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.669647+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x56222b8b4400 session 0x562224641c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150618112 unmapped: 72384512 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562226e05400 session 0x5622262e85a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.669807+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228a96000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562228a96000 session 0x56222539eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 ms_handle_reset con 0x562224cdac00 session 0x5622274a8d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.670048+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2443521 data_alloc: 234881024 data_used: 22958080
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.670246+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.071903229s of 10.290650368s, submitted: 27
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 273 ms_handle_reset con 0x562226d83400 session 0x5622274a94a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.670442+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 273 heartbeat osd_stat(store_statfs(0x4f611b000/0x0/0x4ffc00000, data 0x46254da/0x47a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.670652+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.670801+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 274 ms_handle_reset con 0x56222b8b4400 session 0x562226cb4d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 274 heartbeat osd_stat(store_statfs(0x4f611c000/0x0/0x4ffc00000, data 0x46254da/0x47a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.670986+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150626304 unmapped: 72376320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2450593 data_alloc: 234881024 data_used: 22974464
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 275 ms_handle_reset con 0x56222a84dc00 session 0x56222736b2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 275 ms_handle_reset con 0x56222b8b5800 session 0x5622268bfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 275 ms_handle_reset con 0x56222b8b5c00 session 0x5622268acf00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.671154+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 72318976 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 275 ms_handle_reset con 0x562224cdac00 session 0x5622253ddc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.671281+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150700032 unmapped: 72302592 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 276 ms_handle_reset con 0x562226d83400 session 0x5622262cbc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.671442+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 276 ms_handle_reset con 0x56222a84dc00 session 0x56222461fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150503424 unmapped: 72499200 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 276 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 277 ms_handle_reset con 0x56222b8b4400 session 0x5622274cbe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.671610+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 277 ms_handle_reset con 0x562224cdac00 session 0x5622262e8000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153927680 unmapped: 69074944 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 277 heartbeat osd_stat(store_statfs(0x4f4c86000/0x0/0x4ffc00000, data 0x4b238aa/0x4a98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x562226d83400 session 0x5622253dc000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.671795+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x56222a84dc00 session 0x562227138f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 68845568 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2585079 data_alloc: 234881024 data_used: 23855104
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.672024+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x56222b8b5c00 session 0x562226ce1c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.402806282s of 10.175094604s, submitted: 170
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x562226e05400 session 0x562223f5a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 68780032 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x56222a84cc00 session 0x562226ce0960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.672187+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x562224cdac00 session 0x562224b91e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147890176 unmapped: 75112448 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 heartbeat osd_stat(store_statfs(0x4f46a1000/0x0/0x4ffc00000, data 0x540611a/0x507d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x562226d83400 session 0x56222534cd20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x56222a84dc00 session 0x56222742bc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.672346+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147890176 unmapped: 75112448 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.672643+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 147922944 unmapped: 75079680 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.672772+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2454828 data_alloc: 234881024 data_used: 21274624
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.672924+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x56222b8b5c00 session 0x5622273e0960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x562226e05c00 session 0x5622246401e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 ms_handle_reset con 0x5622293a7400 session 0x5622253dc1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.673055+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.673265+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 heartbeat osd_stat(store_statfs(0x4f55d1000/0x0/0x4ffc00000, data 0x44d60db/0x414d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.673504+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 279 ms_handle_reset con 0x56222b8b5c00 session 0x5622273e1680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.673699+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2458546 data_alloc: 234881024 data_used: 21295104
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.673872+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.674037+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 279 heartbeat osd_stat(store_statfs(0x4f55ce000/0x0/0x4ffc00000, data 0x44d7b92/0x414f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.674267+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.674407+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.674567+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2458546 data_alloc: 234881024 data_used: 21295104
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.674737+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 279 heartbeat osd_stat(store_statfs(0x4f55ce000/0x0/0x4ffc00000, data 0x44d7b92/0x414f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 73777152 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.820934296s of 15.480407715s, submitted: 75
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.674897+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x562226d83400 session 0x5622253dc3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 heartbeat osd_stat(store_statfs(0x4f55ca000/0x0/0x4ffc00000, data 0x44d9766/0x4152000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 73768960 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.675074+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x562224cdac00 session 0x562224ba8780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x56222a84cc00 session 0x562225761e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 73768960 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.675303+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x562226d83400 session 0x56222736a000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x562226e05c00 session 0x5622271392c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 ms_handle_reset con 0x5622293a7400 session 0x562224b91680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 heartbeat osd_stat(store_statfs(0x4f55ef000/0x0/0x4ffc00000, data 0x44b5766/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 73768960 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.675512+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 ms_handle_reset con 0x56222b8b5c00 session 0x56222763f860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 ms_handle_reset con 0x56222a84dc00 session 0x56222736b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 ms_handle_reset con 0x56222b8b5c00 session 0x5622274cba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 149250048 unmapped: 73752576 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2459864 data_alloc: 234881024 data_used: 21180416
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 ms_handle_reset con 0x562226e05c00 session 0x562227139680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.675679+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 ms_handle_reset con 0x5622293a7400 session 0x562226cb4780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x562226d83400 session 0x56222763fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x56222b8b5000 session 0x5622273e0b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 heartbeat osd_stat(store_statfs(0x4f5b04000/0x0/0x4ffc00000, data 0x3fa131c/0x3c19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x56222b8b5400 session 0x5622274994a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x5622293a7400 session 0x562225338960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x562226e05c00 session 0x562227138780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 ms_handle_reset con 0x562226d83400 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140378112 unmapped: 82624512 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.675810+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140386304 unmapped: 82616320 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.675978+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 ms_handle_reset con 0x562226d83400 session 0x5622253381e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 82608128 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 ms_handle_reset con 0x562226e05c00 session 0x5622274cab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.676219+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 heartbeat osd_stat(store_statfs(0x4f836d000/0x0/0x4ffc00000, data 0x122dc6a/0x13b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 ms_handle_reset con 0x5622293a7400 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 ms_handle_reset con 0x56222b8b5000 session 0x5622257610e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 82608128 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.676398+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 82608128 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1982767 data_alloc: 218103808 data_used: 5677056
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.676636+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x56222b8b5400 session 0x562224ba9e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 heartbeat osd_stat(store_statfs(0x4f836d000/0x0/0x4ffc00000, data 0x122dc6a/0x13b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 82608128 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.676739+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.346277237s of 10.386232376s, submitted: 202
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x562226d83400 session 0x56222744a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x56222b8b5400 session 0x56222763e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140394496 unmapped: 82608128 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.676864+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x562226e05c00 session 0x56222461e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x56222b8b5000 session 0x5622262cb4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x5622293a7400 session 0x562226ce0780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140615680 unmapped: 82386944 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 ms_handle_reset con 0x5622293a7400 session 0x5622262e9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.677054+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 140615680 unmapped: 82386944 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.677282+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 285 ms_handle_reset con 0x562226d83400 session 0x562224ba9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 285 heartbeat osd_stat(store_statfs(0x4f7708000/0x0/0x4ffc00000, data 0x1e8f398/0x2015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 81330176 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2081993 data_alloc: 218103808 data_used: 5693440
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.677458+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x562226e05c00 session 0x5622273e0f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x56222b8b5000 session 0x5622268f54a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 81330176 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x56222b8b5400 session 0x5622271094a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.677683+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 81321984 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.677851+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x56222b8b5400 session 0x56222744be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x562226e05c00 session 0x562226cae3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 ms_handle_reset con 0x562226d83400 session 0x562224abcd20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 81305600 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.678028+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 81305600 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.678202+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622293a7400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141697024 unmapped: 81305600 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2085693 data_alloc: 218103808 data_used: 5701632
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.678373+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 287 ms_handle_reset con 0x56222b8b5000 session 0x562226cafa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 287 heartbeat osd_stat(store_statfs(0x4f7706000/0x0/0x4ffc00000, data 0x1e91012/0x2018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 81289216 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.678538+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.620720863s of 10.147133827s, submitted: 129
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 288 ms_handle_reset con 0x5622293a7400 session 0x56222461e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 81281024 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.678712+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141721600 unmapped: 81281024 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 288 ms_handle_reset con 0x562226e05c00 session 0x5622253dc960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.678897+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x562226d83400 session 0x56222763f4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222b8b5000 session 0x5622274994a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 81256448 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f76f9000/0x0/0x4ffc00000, data 0x1e964b4/0x2022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.679124+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222b8b5400 session 0x5622273e0b40
Nov 29 08:25:35 compute-0 ceph-mon[75237]: pgmap v2221: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='client.19345 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='client.19349 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2101808573' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1998180507' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 08:25:35 compute-0 ceph-mon[75237]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222a84dc00 session 0x56222763fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222b8b5c00 session 0x562227139e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222a84dc00 session 0x5622274cba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141811712 unmapped: 81190912 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2106071 data_alloc: 218103808 data_used: 5713920
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x562226d83400 session 0x56222736b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.679292+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222b8b5000 session 0x562224ba8960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222a84cc00 session 0x5622268bfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 141844480 unmapped: 81158144 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x562226d83400 session 0x5622268be780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222a84cc00 session 0x56222736a5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.679461+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222a84dc00 session 0x5622253385a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148316160 unmapped: 74686464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 ms_handle_reset con 0x56222b8b5000 session 0x5622253dcb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.679720+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 290 ms_handle_reset con 0x56222b8b5c00 session 0x562224abdc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 290 ms_handle_reset con 0x562226d83400 session 0x56222742a000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148324352 unmapped: 74678272 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.680143+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 291 ms_handle_reset con 0x56222a84cc00 session 0x5622271392c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 291 ms_handle_reset con 0x56222a84dc00 session 0x5622262e9a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 291 ms_handle_reset con 0x56222b8b5000 session 0x5622271394a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 74645504 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.680368+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 heartbeat osd_stat(store_statfs(0x4f76f0000/0x0/0x4ffc00000, data 0x1e99db7/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x56222b8b4c00 session 0x56222744ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 heartbeat osd_stat(store_statfs(0x4f76f0000/0x0/0x4ffc00000, data 0x1e99db7/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148381696 unmapped: 74620928 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2204722 data_alloc: 234881024 data_used: 17571840
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.680528+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x562226d83400 session 0x56222744a5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x56222a84cc00 session 0x5622271092c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148406272 unmapped: 74596352 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.680855+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x56222a84dc00 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x56222b8b5000 session 0x56222534c780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 74645504 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.681029+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222746b800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.277569771s of 10.934968948s, submitted: 203
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 ms_handle_reset con 0x562225368400 session 0x56222534d680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148357120 unmapped: 74645504 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.681581+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 293 ms_handle_reset con 0x562225369000 session 0x562227108b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148365312 unmapped: 74637312 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.681817+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 294 ms_handle_reset con 0x562225368400 session 0x562226c72960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 294 ms_handle_reset con 0x56222746b800 session 0x562227498960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 294 heartbeat osd_stat(store_statfs(0x4f76ec000/0x0/0x4ffc00000, data 0x1e9d50b/0x2031000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148373504 unmapped: 74629120 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2210675 data_alloc: 234881024 data_used: 17584128
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.681972+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148373504 unmapped: 74629120 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.682173+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 294 ms_handle_reset con 0x562226d83400 session 0x562226ce0f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 148414464 unmapped: 74588160 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 295 ms_handle_reset con 0x56222a84dc00 session 0x562224ae2b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 295 ms_handle_reset con 0x56222a84cc00 session 0x562224b91c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.682384+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150265856 unmapped: 72736768 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 296 ms_handle_reset con 0x56222a84dc00 session 0x562224abdc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 296 ms_handle_reset con 0x562226d83c00 session 0x5622246410e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.682532+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x562225368400 session 0x56222763f860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 heartbeat osd_stat(store_statfs(0x4f6d53000/0x0/0x4ffc00000, data 0x241fa01/0x25ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x562225369000 session 0x56222742be00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 72712192 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x562225368400 session 0x56222744b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.682652+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x562225369000 session 0x562225338d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x56222a84dc00 session 0x562227109c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 ms_handle_reset con 0x56222a84cc00 session 0x5622253dcb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150306816 unmapped: 72695808 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2286670 data_alloc: 234881024 data_used: 17645568
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222746b800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 298 ms_handle_reset con 0x56222746b800 session 0x56222742a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.682798+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 298 ms_handle_reset con 0x562225368400 session 0x5622274ca1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562226d83400 session 0x5622268bfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150364160 unmapped: 72638464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562225369000 session 0x5622268972c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562226d83c00 session 0x562227138d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.682936+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 heartbeat osd_stat(store_statfs(0x4f6d38000/0x0/0x4ffc00000, data 0x2435cc5/0x25d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150364160 unmapped: 72638464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84cc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x56222a84cc00 session 0x5622273e01e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.683114+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562225368400 session 0x5622274985a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150405120 unmapped: 72597504 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.683581+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.977450371s of 11.460409164s, submitted: 135
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562225369000 session 0x5622274983c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150437888 unmapped: 72564736 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x562226d83400 session 0x5622268f4780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.683782+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 heartbeat osd_stat(store_statfs(0x4f6d39000/0x0/0x4ffc00000, data 0x2435c63/0x25d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 ms_handle_reset con 0x56222a84dc00 session 0x562224ba8960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 heartbeat osd_stat(store_statfs(0x4f6d3a000/0x0/0x4ffc00000, data 0x2435c63/0x25d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150478848 unmapped: 72523776 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282651 data_alloc: 234881024 data_used: 17661952
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.683993+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 300 ms_handle_reset con 0x56222b8b5000 session 0x562227139e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 72491008 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.684171+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 301 ms_handle_reset con 0x562225368400 session 0x562226cb5e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 301 ms_handle_reset con 0x562226d83c00 session 0x5622271085a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f6d36000/0x0/0x4ffc00000, data 0x2437837/0x25d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 72466432 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.684370+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 72466432 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 302 ms_handle_reset con 0x562225369000 session 0x56222736b860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.684517+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 72466432 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.684729+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 72458240 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2293005 data_alloc: 234881024 data_used: 17682432
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 302 ms_handle_reset con 0x562226d83400 session 0x5622273e1860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.684916+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 302 heartbeat osd_stat(store_statfs(0x4f6d30000/0x0/0x4ffc00000, data 0x243afd1/0x25dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 302 handle_osd_map epochs [303,303], i have 303, src has [1,303]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222a84dc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 303 ms_handle_reset con 0x56222a84dc00 session 0x562225339680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 71401472 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.685140+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 304 ms_handle_reset con 0x562225368400 session 0x5622271094a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 304 ms_handle_reset con 0x562225369000 session 0x562227138780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151666688 unmapped: 71335936 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.685279+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 304 heartbeat osd_stat(store_statfs(0x4f6d2a000/0x0/0x4ffc00000, data 0x243e821/0x25e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 71294976 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.685590+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 305 ms_handle_reset con 0x562226d83400 session 0x5622274cab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.304728508s of 10.017197609s, submitted: 247
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 306 ms_handle_reset con 0x562226d83c00 session 0x562224b44f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 71294976 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.685866+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 306 ms_handle_reset con 0x562226e48c00 session 0x56222742af00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 71294976 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2308480 data_alloc: 234881024 data_used: 17694720
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.686031+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 307 ms_handle_reset con 0x562226e48c00 session 0x56222742a5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151724032 unmapped: 71278592 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 307 ms_handle_reset con 0x562225368400 session 0x5622268bf860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.686279+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f6d23000/0x0/0x4ffc00000, data 0x2443cd5/0x25ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151724032 unmapped: 71278592 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f6d20000/0x0/0x4ffc00000, data 0x24458c7/0x25ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.686441+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 309 ms_handle_reset con 0x562225369000 session 0x562226caef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 71213056 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.686632+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 310 ms_handle_reset con 0x56222585fc00 session 0x5622273e0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 310 ms_handle_reset con 0x562226d83400 session 0x5622262e9860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f6d1d000/0x0/0x4ffc00000, data 0x24474b9/0x25ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151814144 unmapped: 71188480 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.686829+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 311 heartbeat osd_stat(store_statfs(0x4f6d18000/0x0/0x4ffc00000, data 0x244abd3/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151814144 unmapped: 71188480 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2319286 data_alloc: 234881024 data_used: 17694720
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.687052+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 312 ms_handle_reset con 0x562225368400 session 0x5622273e0f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 71180288 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.688159+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 313 ms_handle_reset con 0x562225369000 session 0x562227109860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151846912 unmapped: 71155712 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.688557+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 314 ms_handle_reset con 0x56222585fc00 session 0x5622253381e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 314 ms_handle_reset con 0x562226e48c00 session 0x562225760000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151879680 unmapped: 71122944 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.690297+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 314 heartbeat osd_stat(store_statfs(0x4f6d10000/0x0/0x4ffc00000, data 0x244ff5f/0x25fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 71114752 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.691594+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.808970451s of 10.559885979s, submitted: 237
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 315 ms_handle_reset con 0x562226d83c00 session 0x5622274985a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 315 heartbeat osd_stat(store_statfs(0x4f6d10000/0x0/0x4ffc00000, data 0x244ff5f/0x25fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151887872 unmapped: 71114752 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2333390 data_alloc: 234881024 data_used: 17694720
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.691910+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 316 ms_handle_reset con 0x562225369000 session 0x562224b903c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151920640 unmapped: 71081984 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.692249+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 317 ms_handle_reset con 0x562225368400 session 0x562227138d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 317 heartbeat osd_stat(store_statfs(0x4f6d0a000/0x0/0x4ffc00000, data 0x2453635/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151953408 unmapped: 71049216 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.692726+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151953408 unmapped: 71049216 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 317 ms_handle_reset con 0x56222585fc00 session 0x5622268972c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.692856+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 318 ms_handle_reset con 0x562226e49c00 session 0x56222742a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 319 ms_handle_reset con 0x562226e49800 session 0x562226c01860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 319 ms_handle_reset con 0x562226e49800 session 0x56222534d680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 319 ms_handle_reset con 0x562226e48c00 session 0x5622274ca1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 319 ms_handle_reset con 0x562225368400 session 0x56222539fa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 70991872 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.693247+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 320 ms_handle_reset con 0x562226e49000 session 0x56222744ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153075712 unmapped: 69926912 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2358537 data_alloc: 234881024 data_used: 17694720
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 320 ms_handle_reset con 0x56222585fc00 session 0x56222744a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.693438+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 320 heartbeat osd_stat(store_statfs(0x4f6cfd000/0x0/0x4ffc00000, data 0x2458fd9/0x260f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562225369000 session 0x56222534c000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 69918720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.693839+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562225368400 session 0x562226cafa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153075712 unmapped: 69926912 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.694245+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562226e48c00 session 0x56222461e780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562226e49800 session 0x562226cdf4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562226e49000 session 0x562226cdeb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 ms_handle_reset con 0x562225368400 session 0x56222742a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 69918720 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.694437+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 ms_handle_reset con 0x562225369000 session 0x562226cde780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 ms_handle_reset con 0x562226e48c00 session 0x5622268f5e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 ms_handle_reset con 0x562226e49800 session 0x56222744a000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 ms_handle_reset con 0x562226e48400 session 0x5622273e1a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153108480 unmapped: 69894144 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.694629+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 ms_handle_reset con 0x562226e48400 session 0x562226c00780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 323 ms_handle_reset con 0x562226e49c00 session 0x562226cdf860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 323 ms_handle_reset con 0x562225368400 session 0x562226c6a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225369000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.192251205s of 10.494208336s, submitted: 96
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 323 ms_handle_reset con 0x562226e48c00 session 0x562226c730e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 69812224 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2368925 data_alloc: 234881024 data_used: 17711104
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.695238+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 324 handle_osd_map epochs [324,324], i have 324, src has [1,324]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 324 ms_handle_reset con 0x562225369000 session 0x562227498d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153280512 unmapped: 69722112 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.695389+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 324 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x2461725/0x261f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 325 ms_handle_reset con 0x562225368400 session 0x5622274ca1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.695532+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 69697536 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.695758+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 69689344 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 325 handle_osd_map epochs [326,327], i have 325, src has [1,327]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 ms_handle_reset con 0x562226e48400 session 0x562224ae25a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 ms_handle_reset con 0x562226e48c00 session 0x5622262cb4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.695933+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 69656576 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 ms_handle_reset con 0x562226e49c00 session 0x562226dd2d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.696085+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 69656576 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2382397 data_alloc: 234881024 data_used: 17760256
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 heartbeat osd_stat(store_statfs(0x4f6ce4000/0x0/0x4ffc00000, data 0x2466af7/0x2628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.696328+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 69656576 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.696480+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 69656576 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c39c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.696642+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 69648384 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 ms_handle_reset con 0x562226c38000 session 0x562226cdf2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.697035+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 69648384 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 heartbeat osd_stat(store_statfs(0x4f6ce5000/0x0/0x4ffc00000, data 0x2466b07/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.324016571s of 10.121621132s, submitted: 98
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.697204+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153362432 unmapped: 69640192 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2389809 data_alloc: 234881024 data_used: 17772544
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 328 ms_handle_reset con 0x562225368400 session 0x562225761a40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 328 heartbeat osd_stat(store_statfs(0x4f6ce1000/0x0/0x4ffc00000, data 0x24687c9/0x262d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 329 heartbeat osd_stat(store_statfs(0x4f6ce1000/0x0/0x4ffc00000, data 0x24687c9/0x262d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 329 ms_handle_reset con 0x562226c38000 session 0x562227498780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 329 ms_handle_reset con 0x562226c39c00 session 0x562226cde780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.697370+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 69615616 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.697559+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 69615616 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.697656+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153411584 unmapped: 69591040 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 329 heartbeat osd_stat(store_statfs(0x4f6cdd000/0x0/0x4ffc00000, data 0x246a3b9/0x2630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.697840+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153419776 unmapped: 69582848 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.698006+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428392 data_alloc: 234881024 data_used: 17805312
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157368320 unmapped: 65634304 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 330 ms_handle_reset con 0x562226e48400 session 0x56222744ba40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 330 ms_handle_reset con 0x562226e48c00 session 0x5622274a8b40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.698139+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 331 ms_handle_reset con 0x562225368400 session 0x562224b912c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157392896 unmapped: 65609728 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.698262+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 65675264 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c39c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 331 ms_handle_reset con 0x562226c39c00 session 0x5622274ca3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 332 ms_handle_reset con 0x562226e48400 session 0x56222736bc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 332 heartbeat osd_stat(store_statfs(0x4f69c8000/0x0/0x4ffc00000, data 0x277da55/0x2944000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.698381+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153395200 unmapped: 69607424 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 333 ms_handle_reset con 0x562226e48c00 session 0x562226c72d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 333 ms_handle_reset con 0x562226c38000 session 0x56222742b680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.698574+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153411584 unmapped: 69591040 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 333 ms_handle_reset con 0x562225368400 session 0x5622246410e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.698721+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c39c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.263117790s of 10.168362617s, submitted: 78
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2443083 data_alloc: 234881024 data_used: 18522112
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 334 ms_handle_reset con 0x562226e48400 session 0x5622268f52c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153436160 unmapped: 69566464 heap: 223002624 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 334 ms_handle_reset con 0x562226e49c00 session 0x562226ce0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.698863+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203776000 unmapped: 44417024 heap: 248193024 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 335 ms_handle_reset con 0x562226c39c00 session 0x562226cafc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 335 heartbeat osd_stat(store_statfs(0x4f35bf000/0x0/0x4ffc00000, data 0x5b83316/0x5d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.698985+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 94650368 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.699138+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153567232 unmapped: 98828288 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.699276+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157835264 unmapped: 94560256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.699519+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3649653 data_alloc: 234881024 data_used: 18939904
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157917184 unmapped: 94478336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 ms_handle_reset con 0x5622262f1400 session 0x5622253dcb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.699660+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 heartbeat osd_stat(store_statfs(0x4eb1b9000/0x0/0x4ffc00000, data 0xdf86bc8/0xe155000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157982720 unmapped: 94412800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 ms_handle_reset con 0x562225368400 session 0x562227139e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.699800+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 heartbeat osd_stat(store_statfs(0x4e91b9000/0x0/0x4ffc00000, data 0xff86bc8/0x10155000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153821184 unmapped: 98574336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c39c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.700185+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162226176 unmapped: 90169344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 ms_handle_reset con 0x562226e48400 session 0x56222736ab40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 ms_handle_reset con 0x562226c39c00 session 0x56222534da40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 ms_handle_reset con 0x5622262f1000 session 0x562226cafa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.700565+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 93200384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 337 ms_handle_reset con 0x5622262f1c00 session 0x562226cb4960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.700678+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 337 ms_handle_reset con 0x562226e49c00 session 0x56222744b860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4558611 data_alloc: 234881024 data_used: 18956288
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.102324963s of 10.119833946s, submitted: 123
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 93118464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.700793+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 168927232 unmapped: 83468288 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 337 ms_handle_reset con 0x562225368400 session 0x56222744a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 337 ms_handle_reset con 0x562226e48c00 session 0x56222736af00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 338 heartbeat osd_stat(store_statfs(0x4e0db5000/0x0/0x4ffc00000, data 0x183887b8/0x18558000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 338 ms_handle_reset con 0x5622262f1000 session 0x5622257610e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.700964+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156360704 unmapped: 96034816 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c39c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.701152+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156368896 unmapped: 96026624 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.701279+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 340 ms_handle_reset con 0x562226c39c00 session 0x56222763ef00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 340 ms_handle_reset con 0x562226e48400 session 0x5622274cb2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 340 ms_handle_reset con 0x562225368400 session 0x5622268f52c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156483584 unmapped: 95911936 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 340 heartbeat osd_stat(store_statfs(0x4e0dac000/0x0/0x4ffc00000, data 0x1838d5e3/0x1855f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.701392+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4901925 data_alloc: 234881024 data_used: 18968576
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156483584 unmapped: 95911936 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.701540+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 ms_handle_reset con 0x5622262f1000 session 0x5622246410e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156524544 unmapped: 95870976 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 ms_handle_reset con 0x562226e49800 session 0x562227138780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 ms_handle_reset con 0x562226e49400 session 0x562226cdf4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.701685+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 ms_handle_reset con 0x5622262f1000 session 0x5622246405a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 ms_handle_reset con 0x562225368400 session 0x562226c6a1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 95854592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.701817+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 342 ms_handle_reset con 0x562226e48c00 session 0x562224ae3680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 342 ms_handle_reset con 0x562226e48400 session 0x562226c6b860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156614656 unmapped: 95780864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 342 ms_handle_reset con 0x5622262f0c00 session 0x5622262cb0e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 ms_handle_reset con 0x562226e49800 session 0x562226c6a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.701985+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 ms_handle_reset con 0x5622262f1000 session 0x562226cdc1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 ms_handle_reset con 0x562226e49400 session 0x5622273e12c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 ms_handle_reset con 0x562225368400 session 0x5622274ca000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156688384 unmapped: 95707136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 heartbeat osd_stat(store_statfs(0x4e0daa000/0x0/0x4ffc00000, data 0x18390fad/0x18563000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 ms_handle_reset con 0x562226e48400 session 0x5622274a83c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.702213+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 ms_handle_reset con 0x562226e49c00 session 0x562226896d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4897936 data_alloc: 234881024 data_used: 18976768
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.505434036s of 10.006747246s, submitted: 237
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 ms_handle_reset con 0x562226e48400 session 0x562226cdfe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156704768 unmapped: 95690752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 heartbeat osd_stat(store_statfs(0x4e10b0000/0x0/0x4ffc00000, data 0x18084ed1/0x1825a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 ms_handle_reset con 0x562225368400 session 0x56222744a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.702356+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 heartbeat osd_stat(store_statfs(0x4e10b0000/0x0/0x4ffc00000, data 0x18084ed1/0x1825a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 344 handle_osd_map epochs [345,345], i have 345, src has [1,345]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 345 ms_handle_reset con 0x5622262f1000 session 0x5622271381e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 95649792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.702544+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 346 ms_handle_reset con 0x562226e49400 session 0x562226dd34a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156762112 unmapped: 95633408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 346 ms_handle_reset con 0x562226e05c00 session 0x562224b91680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 346 ms_handle_reset con 0x56222b8b5400 session 0x562226cafe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 346 ms_handle_reset con 0x562226e49400 session 0x562226caed20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.702702+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156762112 unmapped: 95633408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.702883+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225368400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 ms_handle_reset con 0x5622262f1000 session 0x5622274ca5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 ms_handle_reset con 0x562225368400 session 0x5622268f5680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157835264 unmapped: 94560256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.703016+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 heartbeat osd_stat(store_statfs(0x4e10ac000/0x0/0x4ffc00000, data 0x1808a234/0x18261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,3])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5186941 data_alloc: 234881024 data_used: 18976768
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 154238976 unmapped: 98156544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 ms_handle_reset con 0x56222b8b5400 session 0x562227108000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 ms_handle_reset con 0x562226e05c00 session 0x56222763ed20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.703457+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155017216 unmapped: 97378304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.704464+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 151248896 unmapped: 101146624 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 ms_handle_reset con 0x562226e49c00 session 0x56222762fe00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.704629+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 heartbeat osd_stat(store_statfs(0x4d8a89000/0x0/0x4ffc00000, data 0x2029e234/0x20475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,2,0,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155820032 unmapped: 96575488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 348 ms_handle_reset con 0x562226e48c00 session 0x562227109680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.704816+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 ms_handle_reset con 0x562226e49800 session 0x56222468a3c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 152158208 unmapped: 100237312 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 ms_handle_reset con 0x5622262f0000 session 0x562226cb4780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 ms_handle_reset con 0x562226e49800 session 0x5622268be780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 ms_handle_reset con 0x562226e48400 session 0x562224ae2780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.704945+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6630076 data_alloc: 218103808 data_used: 5926912
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.832432747s of 10.083954811s, submitted: 195
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 heartbeat osd_stat(store_statfs(0x4d3280000/0x0/0x4ffc00000, data 0x25aa1c83/0x25c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,1,1])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156745728 unmapped: 95649792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.705130+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e05c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 161128448 unmapped: 91267072 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.705247+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 heartbeat osd_stat(store_statfs(0x4cda81000/0x0/0x4ffc00000, data 0x2b2a2177/0x2b47d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 152928256 unmapped: 99467264 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.705389+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 157515776 unmapped: 94879744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x562226e48c00 session 0x562226c6a780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x562226e05c00 session 0x56222742ad20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x5622262f0000 session 0x562226c01860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x5622262f1000 session 0x5622274ca960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.705654+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x562226e49400 session 0x5622262e8f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 ms_handle_reset con 0x562226e49c00 session 0x562224b44f00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153509888 unmapped: 98885632 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.705979+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 ms_handle_reset con 0x562226e48c00 session 0x56222763e1e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 ms_handle_reset con 0x562226e48400 session 0x56222762eb40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7452373 data_alloc: 218103808 data_used: 5939200
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 ms_handle_reset con 0x5622262f0000 session 0x5622274a81e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 ms_handle_reset con 0x5622262f1000 session 0x56222763fc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 98754560 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 heartbeat osd_stat(store_statfs(0x4c9676000/0x0/0x4ffc00000, data 0x2f6a716b/0x2f885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.706161+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 98746368 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.706343+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x562226e49400 session 0x56222762f2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x562226e49c00 session 0x5622262cad20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153739264 unmapped: 98656256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x5622262f1000 session 0x562227109e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x5622262f0000 session 0x562226dd32c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x562226e48400 session 0x5622274a9680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.706575+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 ms_handle_reset con 0x562226e49400 session 0x562226c6a960
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153796608 unmapped: 98598912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.707002+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 98590720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.707247+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7482783 data_alloc: 218103808 data_used: 5955584
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 98590720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.580615044s of 10.352890968s, submitted: 216
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 354 ms_handle_reset con 0x562226e49c00 session 0x562227498d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 354 ms_handle_reset con 0x562226e49c00 session 0x562226dd2d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.707427+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 354 heartbeat osd_stat(store_statfs(0x4c92af000/0x0/0x4ffc00000, data 0x2fa727ea/0x2fc4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153821184 unmapped: 98574336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.707594+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153829376 unmapped: 98566144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.707766+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153829376 unmapped: 98566144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.707979+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153837568 unmapped: 98557952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.708160+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7488379 data_alloc: 218103808 data_used: 5955584
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x5622262f0000 session 0x5622273e10e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 153837568 unmapped: 98557952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x5622262f1000 session 0x562226cb4d20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x562226e48400 session 0x5622274caf00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x562226e49400 session 0x5622262e9c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.708329+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 heartbeat osd_stat(store_statfs(0x4c92aa000/0x0/0x4ffc00000, data 0x2fa75dce/0x2fc54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155131904 unmapped: 97263616 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x562226e49400 session 0x562226c01e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.708543+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x5622262f0000 session 0x56222763f2c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 ms_handle_reset con 0x5622262f1000 session 0x56222468b4a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155131904 unmapped: 97263616 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.708698+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155058176 unmapped: 97337344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b5400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 356 ms_handle_reset con 0x562226e49800 session 0x5622274ca5a0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.708913+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x56222b8b5400 session 0x562226c003c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x562227109c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155131904 unmapped: 97263616 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e48400 session 0x562227139860
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.709046+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7611665 data_alloc: 234881024 data_used: 9945088
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.709231+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c8932000/0x0/0x4ffc00000, data 0x303e5cdd/0x305ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.709352+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.709487+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.709719+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.709849+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7611665 data_alloc: 234881024 data_used: 9945088
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.710130+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.710306+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c8932000/0x0/0x4ffc00000, data 0x303e5cdd/0x305ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.710532+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x562226cae780
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49400 session 0x562224ae3c20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49800 session 0x562226cdfa40
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49800 session 0x562226dd30e0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.241830826s of 17.879131317s, submitted: 114
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 97247232 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x562224abdc20
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x5622253dd680
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e48400 session 0x562224abcf00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49400
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49400 session 0x5622268f5e00
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:35 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x5622257603c0
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.710822+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 96190464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:35 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.711073+0000)
Nov 29 08:25:35 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7716322 data_alloc: 234881024 data_used: 9945088
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 156205056 unmapped: 96190464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x56222762ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.711272+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c7c62000/0x0/0x4ffc00000, data 0x310b5d4f/0x3129c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 93388800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e48400 session 0x5622274cb2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.711454+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49800 session 0x56222762ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad800 session 0x5622262e8000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 92954624 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.711622+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 159449088 unmapped: 92946432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.711801+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 160874496 unmapped: 91521024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.711938+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7872078 data_alloc: 234881024 data_used: 22646784
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 87384064 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e48400 session 0x562226896d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.712161+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c768c000/0x0/0x4ffc00000, data 0x31689d82/0x31872000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 87375872 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.712388+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 87375872 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.712640+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49800 session 0x562226caf2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 87375872 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.712890+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 87375872 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.713060+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c768c000/0x0/0x4ffc00000, data 0x31689d82/0x31872000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7872078 data_alloc: 234881024 data_used: 22646784
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165060608 unmapped: 87334912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.713237+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.292944908s of 12.827776909s, submitted: 99
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad400 session 0x562227498780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 87326720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.713432+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262adc00 session 0x5622262cb0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 87326720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.713591+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 87326720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b38a800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x56222b38a800 session 0x56222461fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad400 session 0x562223f5a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.713761+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 87326720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.713971+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7873138 data_alloc: 234881024 data_used: 22646784
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c768b000/0x0/0x4ffc00000, data 0x31689d82/0x31872000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 87293952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.714187+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 87293952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.714399+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 87293952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.714547+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c768b000/0x0/0x4ffc00000, data 0x31689d82/0x31872000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [1,0,0,1,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 82518016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.714775+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170655744 unmapped: 81739776 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.714918+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7980394 data_alloc: 234881024 data_used: 23040000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171024384 unmapped: 81371136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.715063+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171024384 unmapped: 81371136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.715250+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d6c000/0x0/0x4ffc00000, data 0x3213cd82/0x3218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171032576 unmapped: 81362944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.715451+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171032576 unmapped: 81362944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.715618+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49c00 session 0x562226cdfc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 171032576 unmapped: 81362944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.971557617s of 13.401759148s, submitted: 97
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262adc00 session 0x5622273e03c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.715827+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7976170 data_alloc: 234881024 data_used: 23044096
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 82321408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.716001+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d73000/0x0/0x4ffc00000, data 0x3213cd82/0x3218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 82321408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.716245+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 82321408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.716441+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.716641+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.716879+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7976170 data_alloc: 234881024 data_used: 23044096
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.717046+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d73000/0x0/0x4ffc00000, data 0x3213cd82/0x3218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.717235+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.717487+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170082304 unmapped: 82313216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e48400 session 0x56222461f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.717643+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d73000/0x0/0x4ffc00000, data 0x3213cd82/0x3218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x562226e49800 session 0x562227108d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170090496 unmapped: 82305024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.717794+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d73000/0x0/0x4ffc00000, data 0x3213cd82/0x3218a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad400 session 0x562225338960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.710086823s of 10.727413177s, submitted: 2
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7977863 data_alloc: 234881024 data_used: 23044096
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262adc00 session 0x5622262e85a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 82296832 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.717941+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 82296832 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c6d73000/0x0/0x4ffc00000, data 0x3213cda5/0x3218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.718137+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 82288640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.718341+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 82288640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.718601+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad800 session 0x56222534cf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x56222742a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x562226cdf4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 170106880 unmapped: 82288640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.718743+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x5622268f52c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7777138 data_alloc: 234881024 data_used: 10235904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 163962880 unmapped: 88432640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.718883+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 163962880 unmapped: 88432640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.719069+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 heartbeat osd_stat(store_statfs(0x4c72e5000/0x0/0x4ffc00000, data 0x31442d00/0x3148d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad400 session 0x56222762e000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad800 session 0x56222762f0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262adc00 session 0x5622274a9c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x562227499c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166256640 unmapped: 86138880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f0000 session 0x5622274990e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad400 session 0x56222742a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262ad800 session 0x562225338960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.719235+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262adc00 session 0x56222461f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 ms_handle_reset con 0x5622262f1000 session 0x562227498780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162078720 unmapped: 90316800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.719422+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162078720 unmapped: 90316800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.719575+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.580369949s of 10.022803307s, submitted: 105
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 357 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7855231 data_alloc: 234881024 data_used: 10240000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 161562624 unmapped: 90832896 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 358 heartbeat osd_stat(store_statfs(0x4c7273000/0x0/0x4ffc00000, data 0x31c3ed72/0x31c8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 358 ms_handle_reset con 0x5622262ad400 session 0x5622268be780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.719757+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 358 ms_handle_reset con 0x5622262ad800 session 0x5622268f5e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 359 ms_handle_reset con 0x5622262adc00 session 0x56222762fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 359 ms_handle_reset con 0x5622262f1000 session 0x562226896d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162365440 unmapped: 90030080 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.719955+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 359 ms_handle_reset con 0x5622262f0000 session 0x562227139860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 360 ms_handle_reset con 0x5622262ad400 session 0x562226dd32c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 360 ms_handle_reset con 0x5622262ad800 session 0x56222736b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162422784 unmapped: 89972736 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.720116+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262adc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162422784 unmapped: 89972736 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.720312+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 ms_handle_reset con 0x5622262f1000 session 0x56222539fa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 89964544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 ms_handle_reset con 0x562226e49c00 session 0x56222468a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.720470+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7903965 data_alloc: 234881024 data_used: 18792448
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 89964544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b38a400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.720641+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 ms_handle_reset con 0x56222b38a400 session 0x5622262e9a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 heartbeat osd_stat(store_statfs(0x4c7655000/0x0/0x4ffc00000, data 0x318576ea/0x318a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162430976 unmapped: 89964544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.720844+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 heartbeat osd_stat(store_statfs(0x4c7655000/0x0/0x4ffc00000, data 0x318576ea/0x318a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 ms_handle_reset con 0x5622262ad400 session 0x5622262e9c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 heartbeat osd_stat(store_statfs(0x4c7651000/0x0/0x4ffc00000, data 0x3185933c/0x318ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162439168 unmapped: 89956352 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.721039+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 ms_handle_reset con 0x5622262f1000 session 0x562226dd2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 heartbeat osd_stat(store_statfs(0x4c7651000/0x0/0x4ffc00000, data 0x3185933c/0x318ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 ms_handle_reset con 0x5622262ad800 session 0x562224ae2780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 162439168 unmapped: 89956352 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.721235+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e49c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176144384 unmapped: 76251136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.721404+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.902538300s of 10.048887253s, submitted: 142
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8273434 data_alloc: 234881024 data_used: 18808832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 88522752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.721552+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 172679168 unmapped: 79716352 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.721743+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 168828928 unmapped: 83566592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.722497+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 363 heartbeat osd_stat(store_statfs(0x4bde4d000/0x0/0x4ffc00000, data 0x3b05ae84/0x3b0b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,1,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174366720 unmapped: 78028800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.722680+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 heartbeat osd_stat(store_statfs(0x4bc24a000/0x0/0x4ffc00000, data 0x3cc5c93e/0x3ccb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,1,0,2,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 85950464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.723327+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9589144 data_alloc: 234881024 data_used: 18817024
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 80297984 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.723753+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 185073664 unmapped: 67321856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.723953+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 168706048 unmapped: 83689472 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.726000+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173350912 unmapped: 79044608 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.726853+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 heartbeat osd_stat(store_statfs(0x4b0369000/0x0/0x4ffc00000, data 0x48b3d93e/0x48b95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,1,4])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 185253888 unmapped: 67141632 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.727014+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 heartbeat osd_stat(store_statfs(0x4adfe7000/0x0/0x4ffc00000, data 0x4aebf93e/0x4af17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,3])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.630207062s of 10.029790878s, submitted: 393
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 10841820 data_alloc: 234881024 data_used: 19603456
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x562226fcb400 session 0x562227109860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x562226e49c00 session 0x562224b912c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 75890688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.727133+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x5622262ad400 session 0x56222463c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x5622262ad800 session 0x56222736ab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.727278+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 75710464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.727497+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176701440 unmapped: 75694080 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.727676+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176857088 unmapped: 75538432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x5622262f1000 session 0x56222742b680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 heartbeat osd_stat(store_statfs(0x4c47cd000/0x0/0x4ffc00000, data 0x322d28cc/0x32328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,15])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.727871+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 74244096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 ms_handle_reset con 0x562226fcb400 session 0x562227139c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8119189 data_alloc: 234881024 data_used: 20160512
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.728022+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 74244096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.728252+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178151424 unmapped: 74244096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 365 heartbeat osd_stat(store_statfs(0x4c47c6000/0x0/0x4ffc00000, data 0x322da86a/0x3232f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 365 ms_handle_reset con 0x562226fca400 session 0x56222763fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.728432+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178159616 unmapped: 74235904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.728621+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178159616 unmapped: 74235904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 365 ms_handle_reset con 0x5622262ad400 session 0x562226cdcb40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 366 ms_handle_reset con 0x5622262ad800 session 0x562226c72d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.728766+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 73932800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 366 heartbeat osd_stat(store_statfs(0x4c6bcc000/0x0/0x4ffc00000, data 0x322dc492/0x32332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.015704632s of 10.067430496s, submitted: 212
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8122892 data_alloc: 234881024 data_used: 20996096
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.728929+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179527680 unmapped: 72867840 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 367 ms_handle_reset con 0x562226fcb400 session 0x56222763f860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 368 ms_handle_reset con 0x5622262f1000 session 0x5622274994a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.729063+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 72835072 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 368 ms_handle_reset con 0x562226fcb800 session 0x562226cdfa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.729314+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179609600 unmapped: 72785920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.729551+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 368 ms_handle_reset con 0x5622262ad400 session 0x56222763ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179609600 unmapped: 72785920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 368 handle_osd_map epochs [370,370], i have 368, src has [1,370]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 368 handle_osd_map epochs [369,370], i have 368, src has [1,370]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 370 ms_handle_reset con 0x5622262ad800 session 0x562226cdf4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.729755+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177864704 unmapped: 74530816 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5362366 data_alloc: 234881024 data_used: 20754432
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 ms_handle_reset con 0x5622262f1000 session 0x56222762e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.729889+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177881088 unmapped: 74514432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 heartbeat osd_stat(store_statfs(0x4dfe45000/0x0/0x4ffc00000, data 0x18ab4074/0x18ca8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 heartbeat osd_stat(store_statfs(0x4dfe41000/0x0/0x4ffc00000, data 0x18ab5ccc/0x18cab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [0,0,3,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 ms_handle_reset con 0x562226fcb400 session 0x56222763fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.730047+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178905088 unmapped: 73490432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 heartbeat osd_stat(store_statfs(0x4f5a41000/0x0/0x4ffc00000, data 0x2eb5ccc/0x30ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.730228+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178905088 unmapped: 73490432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.730370+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178905088 unmapped: 73490432 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 heartbeat osd_stat(store_statfs(0x4f5a41000/0x0/0x4ffc00000, data 0x2eb5ccc/0x30ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 372 heartbeat osd_stat(store_statfs(0x4f5a41000/0x0/0x4ffc00000, data 0x2eb5ccc/0x30ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.730624+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 372 ms_handle_reset con 0x562226fcb800 session 0x56222539fa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177995776 unmapped: 74399744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024295 data_alloc: 234881024 data_used: 20762624
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.730754+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.696551323s of 10.439273834s, submitted: 360
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177995776 unmapped: 74399744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 372 ms_handle_reset con 0x5622262ad400 session 0x56222463c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 372 heartbeat osd_stat(store_statfs(0x4f5a3e000/0x0/0x4ffc00000, data 0x2eb77f4/0x30b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.730889+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177889280 unmapped: 74506240 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.731028+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177905664 unmapped: 74489856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.731211+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177905664 unmapped: 74489856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.731341+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177905664 unmapped: 74489856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030983 data_alloc: 234881024 data_used: 20779008
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.731491+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177905664 unmapped: 74489856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 373 ms_handle_reset con 0x562227d55c00 session 0x5622262e8f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 373 heartbeat osd_stat(store_statfs(0x4f5a3a000/0x0/0x4ffc00000, data 0x2eb92ee/0x30b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.731636+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 74424320 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.731831+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178249728 unmapped: 74145792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 374 ms_handle_reset con 0x56222585f800 session 0x562226cdf860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 374 ms_handle_reset con 0x562227d55400 session 0x5622253381e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.731977+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 374 ms_handle_reset con 0x56222585ec00 session 0x562226cae960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178266112 unmapped: 74129408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.732255+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _finish_auth 0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.733178+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178372608 unmapped: 74022912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 374 heartbeat osd_stat(store_statfs(0x4f59e2000/0x0/0x4ffc00000, data 0x2f14f34/0x310a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 375 ms_handle_reset con 0x56222585ec00 session 0x562226cdc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058938 data_alloc: 234881024 data_used: 20799488
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 375 heartbeat osd_stat(store_statfs(0x4f59e2000/0x0/0x4ffc00000, data 0x2f14f34/0x310a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.732369+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 73932800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.286065102s of 10.857958794s, submitted: 62
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.732509+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 73867264 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x56222585f800 session 0x56222463cf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x5622262ad400 session 0x562226cdeb40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.732704+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178528256 unmapped: 73867264 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 heartbeat osd_stat(store_statfs(0x4f6a0e000/0x0/0x4ffc00000, data 0x2f2a7a0/0x311f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.732922+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x562226fcb400 session 0x56222742b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178536448 unmapped: 73859072 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.733050+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178577408 unmapped: 73818112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x562227d55c00 session 0x56222461e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x562227d55400 session 0x5622271092c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 ms_handle_reset con 0x56222585ec00 session 0x562226dd30e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076544 data_alloc: 234881024 data_used: 20807680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.733205+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178593792 unmapped: 73801728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 377 ms_handle_reset con 0x56222585f800 session 0x5622253390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.733412+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 377 ms_handle_reset con 0x5622262ad400 session 0x56222744a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 73793536 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fcb400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.733586+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 377 ms_handle_reset con 0x562226fcb400 session 0x562224abd680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 73793536 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x56222585ec00 session 0x5622268bfa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.733671+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 178601984 unmapped: 73793536 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x5622262ad400 session 0x562227108d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x562227d55400 session 0x56222762fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 heartbeat osd_stat(store_statfs(0x4f6948000/0x0/0x4ffc00000, data 0x2fee000/0x31e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.733826+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 180125696 unmapped: 72269824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3093957 data_alloc: 234881024 data_used: 20971520
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.734010+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 180322304 unmapped: 72073216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: mgrc ms_handle_reset ms_handle_reset con 0x562226b41400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1430667654
Nov 29 08:25:36 compute-0 ceph-osd[88926]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1430667654,v1:192.168.122.100:6801/1430667654]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: get_auth_request con 0x562226fcb400 auth_method 0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x56222585f800 session 0x562225339680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 heartbeat osd_stat(store_statfs(0x4f68c2000/0x0/0x4ffc00000, data 0x3074f8e/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.734184+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 71933952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.734526+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585fc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.856520653s of 11.282711983s, submitted: 83
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 73179136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x56222585fc00 session 0x562224641c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 ms_handle_reset con 0x562227d55c00 session 0x56222763eb40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.734813+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 73179136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 heartbeat osd_stat(store_statfs(0x4f68c3000/0x0/0x4ffc00000, data 0x3074f9e/0x326b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.734953+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 73170944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 379 ms_handle_reset con 0x56222585ec00 session 0x562226cdfa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099029 data_alloc: 234881024 data_used: 20987904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.735214+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 379 heartbeat osd_stat(store_statfs(0x4f68bf000/0x0/0x4ffc00000, data 0x3076a78/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 73170944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.735343+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 379 heartbeat osd_stat(store_statfs(0x4f68be000/0x0/0x4ffc00000, data 0x3076a88/0x326f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 73170944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.735468+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 73170944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.735644+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 73170944 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 380 ms_handle_reset con 0x562227d55400 session 0x562224b91c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.735782+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 380 heartbeat osd_stat(store_statfs(0x4f68bb000/0x0/0x4ffc00000, data 0x307865c/0x3272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179273728 unmapped: 73121792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 380 ms_handle_reset con 0x562227d54800 session 0x56222762e5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224ad5400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105309 data_alloc: 234881024 data_used: 21147648
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.735950+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179314688 unmapped: 73080832 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 381 ms_handle_reset con 0x5622262ad400 session 0x56222742b2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 381 ms_handle_reset con 0x56222585f800 session 0x5622268be5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.736125+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 73089024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.736367+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.852060318s of 10.013488770s, submitted: 40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 73089024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 381 ms_handle_reset con 0x562227d55400 session 0x56222742ad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 381 ms_handle_reset con 0x5622262ad400 session 0x562226c01e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.736558+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179314688 unmapped: 73080832 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 382 ms_handle_reset con 0x562227d55c00 session 0x56222762f2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f68a8000/0x0/0x4ffc00000, data 0x3087e14/0x3285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.736724+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179322880 unmapped: 73072640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 383 ms_handle_reset con 0x56222585ec00 session 0x56222736a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3117596 data_alloc: 234881024 data_used: 21168128
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.736880+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179339264 unmapped: 73056256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec2400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec3000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.736999+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179322880 unmapped: 73072640 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 383 ms_handle_reset con 0x562227479800 session 0x562226c6be00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x562226ec3000 session 0x5622271092c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.737140+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 72359936 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 heartbeat osd_stat(store_statfs(0x4f6819000/0x0/0x4ffc00000, data 0x3114610/0x3314000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.737282+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x562226ec2400 session 0x5622253390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 180035584 unmapped: 72359936 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x5622262ad400 session 0x562226cdf860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x5622262ad800 session 0x5622274994a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x5622262f1000 session 0x562227108780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.737425+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x5622262ad400 session 0x56222539fa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179535872 unmapped: 72859648 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 ms_handle_reset con 0x5622262ad800 session 0x5622273e0780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec2400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec3000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 385 ms_handle_reset con 0x562226ec3000 session 0x562226ce05a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 385 ms_handle_reset con 0x562226ec2400 session 0x562224ae25a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3127585 data_alloc: 234881024 data_used: 21225472
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.737592+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179560448 unmapped: 72835072 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x562227d55400 session 0x562226cafa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x56222585ec00 session 0x562226cae960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x5622262ad400 session 0x5622262e9a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.737723+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x5622262ad800 session 0x56222742ab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179609600 unmapped: 72785920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec2400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.737857+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec3000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.326025009s of 10.052790642s, submitted: 112
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x562226ec3000 session 0x5622268bfa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 72753152 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224601000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 ms_handle_reset con 0x562224601000 session 0x56222468a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 387 ms_handle_reset con 0x562227d55c00 session 0x5622253dd0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.737979+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x562227479000 session 0x562226c6b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179699712 unmapped: 72695808 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x56222585ec00 session 0x562226cdc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x562226ec2400 session 0x56222763fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x5622262ad400 session 0x562227498d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f68a7000/0x0/0x4ffc00000, data 0x3084bed/0x3286000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.738105+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179716096 unmapped: 72679424 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec2400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x56222585ec00 session 0x5622271390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x562226ec2400 session 0x562226c001e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227d55c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3118371 data_alloc: 234881024 data_used: 21204992
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.738241+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x5622262ad800 session 0x562226c010e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179757056 unmapped: 72638464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x562227479000 session 0x5622262cbe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 ms_handle_reset con 0x562227d55c00 session 0x5622274ca960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.738375+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179765248 unmapped: 72630272 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 389 ms_handle_reset con 0x56222585ec00 session 0x56222744b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 389 ms_handle_reset con 0x562226e48400 session 0x5622274a94a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.738510+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f69ea000/0x0/0x4ffc00000, data 0x2f4129f/0x3143000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179781632 unmapped: 72613888 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec2400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec3000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 390 ms_handle_reset con 0x562227479000 session 0x562224ae3680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 390 ms_handle_reset con 0x5622262ad800 session 0x5622271081e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 390 ms_handle_reset con 0x562226ec3000 session 0x56222762fa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.738624+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 391 ms_handle_reset con 0x5622262ad800 session 0x56222534d680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 72556544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 391 heartbeat osd_stat(store_statfs(0x4f69ef000/0x0/0x4ffc00000, data 0x2f36f23/0x313e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 391 ms_handle_reset con 0x562226e48400 session 0x56222763fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.738804+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 392 ms_handle_reset con 0x56222585ec00 session 0x56222463cf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 392 ms_handle_reset con 0x562227474800 session 0x56222762e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179855360 unmapped: 72540160 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134611 data_alloc: 234881024 data_used: 21176320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 392 ms_handle_reset con 0x562226ec2400 session 0x5622268be5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.738936+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 392 ms_handle_reset con 0x562227479000 session 0x562226c6b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179855360 unmapped: 72540160 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 ms_handle_reset con 0x56222585ec00 session 0x56222461e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.739066+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 ms_handle_reset con 0x562226e48400 session 0x562226c73e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 ms_handle_reset con 0x5622262ad800 session 0x56222762e960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179806208 unmapped: 72589312 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 heartbeat osd_stat(store_statfs(0x4f6a3d000/0x0/0x4ffc00000, data 0x2edc8be/0x30f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.739198+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179830784 unmapped: 72564736 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.979213715s of 10.246170044s, submitted: 272
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 ms_handle_reset con 0x5622262adc00 session 0x56222736a1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.739378+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 72556544 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 ms_handle_reset con 0x5622262ad800 session 0x562226cdcd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x562227474800 session 0x562224ae2780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.739557+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 heartbeat osd_stat(store_statfs(0x4f6a3b000/0x0/0x4ffc00000, data 0x2ede506/0x30f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,0,0,0,6])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176553984 unmapped: 75841536 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x562226e48400 session 0x562226cdf680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2814602 data_alloc: 218103808 data_used: 6180864
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x562227479000 session 0x56222461ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.739726+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 79388672 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.740012+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x56222585ec00 session 0x56222468a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x562224cdac00 session 0x562227109860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.740199+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.740310+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 ms_handle_reset con 0x5622262ad800 session 0x5622274994a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 heartbeat osd_stat(store_statfs(0x4f8562000/0x0/0x4ffc00000, data 0x12f0481/0x1502000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 394 handle_osd_map epochs [395,395], i have 395, src has [1,395]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.740496+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 396 ms_handle_reset con 0x562226e48400 session 0x5622268bf860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823134 data_alloc: 218103808 data_used: 6176768
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.740714+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 396 ms_handle_reset con 0x562227474800 session 0x5622268f5e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227479000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 397 ms_handle_reset con 0x562227479000 session 0x562223f5af00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 397 ms_handle_reset con 0x56222585ec00 session 0x56222736a5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.740942+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173039616 unmapped: 79355904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 397 ms_handle_reset con 0x562224cdac00 session 0x5622274ca780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 398 ms_handle_reset con 0x5622262ad800 session 0x562224b903c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f861f000/0x0/0x4ffc00000, data 0x12f5c7b/0x150d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.741079+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 173047808 unmapped: 79347712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 398 ms_handle_reset con 0x562226e48400 session 0x562226dd2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.741273+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.784668446s of 10.668027878s, submitted: 151
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174096384 unmapped: 78299136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.741431+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 399 ms_handle_reset con 0x562226d83800 session 0x56222468b4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 399 ms_handle_reset con 0x562226d83800 session 0x562226c6bc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174096384 unmapped: 78299136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 400 ms_handle_reset con 0x562224cdac00 session 0x56222763f4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 400 ms_handle_reset con 0x562227474800 session 0x56222762ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2842232 data_alloc: 218103808 data_used: 6189056
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.741578+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174088192 unmapped: 78307328 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 400 ms_handle_reset con 0x56222585ec00 session 0x5622268963c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262ad800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.741798+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174096384 unmapped: 78299136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 401 ms_handle_reset con 0x5622262ad800 session 0x562224ba9e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 401 ms_handle_reset con 0x562224cdac00 session 0x562226ce1a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.741989+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 401 ms_handle_reset con 0x562227474800 session 0x562226cb54a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 78258176 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 402 heartbeat osd_stat(store_statfs(0x4f8614000/0x0/0x4ffc00000, data 0x12fcc03/0x1519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 402 ms_handle_reset con 0x562226e48400 session 0x56222762ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.742135+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 402 ms_handle_reset con 0x562226d83800 session 0x56222539f2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227464c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 402 ms_handle_reset con 0x562227464c00 session 0x562226cb5e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227464c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 403 ms_handle_reset con 0x562227464c00 session 0x562226dd30e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 403 ms_handle_reset con 0x56222585ec00 session 0x56222736af00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175235072 unmapped: 77160448 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.742358+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562226d83800 session 0x5622262e8f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f746f000/0x0/0x4ffc00000, data 0x1300646/0x151f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562224cdac00 session 0x562226c73e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175267840 unmapped: 77127680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e48400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562226e48400 session 0x562226cdef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562227474800 session 0x56222742b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562224cdac00 session 0x56222736a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 ms_handle_reset con 0x562226d83800 session 0x562227108000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.742520+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2863874 data_alloc: 218103808 data_used: 6209536
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175267840 unmapped: 77127680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 405 ms_handle_reset con 0x56222585ec00 session 0x5622274a9860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227464c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.742716+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175308800 unmapped: 77086720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 406 ms_handle_reset con 0x562227464c00 session 0x5622257601e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 406 ms_handle_reset con 0x562224cdac00 session 0x562226cae5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.742966+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 407 handle_osd_map epochs [407,407], i have 407, src has [1,407]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175308800 unmapped: 77086720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 407 ms_handle_reset con 0x562227468000 session 0x5622273e05a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 407 ms_handle_reset con 0x56222585ec00 session 0x5622273e0b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 407 ms_handle_reset con 0x562226d83800 session 0x562226dd23c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.743132+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.166537285s of 10.004016876s, submitted: 280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 77037568 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562227474800 session 0x562226c6a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.743332+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562224cdac00 session 0x5622262cad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 77037568 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x56222585ec00 session 0x56222763fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.743602+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871523 data_alloc: 218103808 data_used: 6221824
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f7463000/0x0/0x4ffc00000, data 0x1308a99/0x152b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175357952 unmapped: 77037568 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562226d83800 session 0x56222742ba40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.743753+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562227468000 session 0x562227139c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 75948032 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562227474800 session 0x562226c6ad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.743902+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 ms_handle_reset con 0x562224cdac00 session 0x56222463c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 75948032 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.744045+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f7463000/0x0/0x4ffc00000, data 0x1308a99/0x152b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 75931648 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 409 ms_handle_reset con 0x56222585ec00 session 0x56222763f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.744190+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 409 ms_handle_reset con 0x562227468000 session 0x56222742a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176472064 unmapped: 75923456 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.744346+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2876547 data_alloc: 218103808 data_used: 6238208
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 75898880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 410 ms_handle_reset con 0x562227474800 session 0x56222534c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 410 heartbeat osd_stat(store_statfs(0x4f7051000/0x0/0x4ffc00000, data 0x130a55a/0x152c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e03c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e02800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 410 ms_handle_reset con 0x562226e02800 session 0x5622253dd860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 410 ms_handle_reset con 0x562226e03c00 session 0x5622274cab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.744466+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 75882496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 411 ms_handle_reset con 0x562224cdac00 session 0x56222736b680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 411 ms_handle_reset con 0x562226d83800 session 0x56222762e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.744616+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222585ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 75882496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.744795+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 75882496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.976663589s of 10.608714104s, submitted: 192
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.744921+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 75866112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.745149+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2885606 data_alloc: 218103808 data_used: 6250496
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 412 ms_handle_reset con 0x562227468000 session 0x562224641860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 412 ms_handle_reset con 0x56222585ec00 session 0x56222744ab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 75866112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.745294+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 412 handle_osd_map epochs [413,413], i have 413, src has [1,413]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 413 heartbeat osd_stat(store_statfs(0x4f7049000/0x0/0x4ffc00000, data 0x130f954/0x1534000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 75849728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 413 ms_handle_reset con 0x562224cdac00 session 0x562226cde780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e03c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.745413+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 75849728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 413 heartbeat osd_stat(store_statfs(0x4f7048000/0x0/0x4ffc00000, data 0x1311546/0x1536000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.745598+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 413 handle_osd_map epochs [414,414], i have 414, src has [1,414]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 75825152 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562226d83800 session 0x5622253dc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562226e03c00 session 0x5622274985a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.745732+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f703f000/0x0/0x4ffc00000, data 0x1314cb6/0x153d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176586752 unmapped: 75808768 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.745962+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2896655 data_alloc: 218103808 data_used: 6254592
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562227468000 session 0x56222742a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227470400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176586752 unmapped: 75808768 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227471000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225772800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562227470400 session 0x56222763f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562227471000 session 0x56222463c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562225772800 session 0x5622271392c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.746256+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176586752 unmapped: 75808768 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 heartbeat osd_stat(store_statfs(0x4f703c000/0x0/0x4ffc00000, data 0x1314d58/0x1542000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 ms_handle_reset con 0x562227474800 session 0x5622273e0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.746401+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 75800576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.746558+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 75800576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.746691+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 75800576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.352289200s of 10.866605759s, submitted: 101
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.746821+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2907124 data_alloc: 218103808 data_used: 6254592
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 75800576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.746968+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 75800576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562224cdac00 session 0x56222742ba40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e03c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.747127+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562226e03c00 session 0x5622273e0b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562226d83800 session 0x56222742b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f7034000/0x0/0x4ffc00000, data 0x131842e/0x1549000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176611328 unmapped: 75784192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562224cdac00 session 0x5622273e05a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225772800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.747279+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562225772800 session 0x5622274a9860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176644096 unmapped: 75751424 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227471000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.747392+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f7035000/0x0/0x4ffc00000, data 0x131841e/0x1548000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176644096 unmapped: 75751424 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x5622278b2000 session 0x56222539f860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227474800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.747613+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2914279 data_alloc: 218103808 data_used: 6275072
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562227471000 session 0x562226ce1a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176644096 unmapped: 75751424 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562226fca800 session 0x56222463cf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225772800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562225772800 session 0x5622273e1a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 ms_handle_reset con 0x562224cdac00 session 0x562226cdf680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.747773+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176644096 unmapped: 75751424 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227471000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 418 ms_handle_reset con 0x562226d83800 session 0x56222736ab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 418 ms_handle_reset con 0x562226fca000 session 0x562226cde5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.747898+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176660480 unmapped: 75735040 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.748202+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562227471000 session 0x562226c001e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x5622262f0800 session 0x562226dd2780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562224cdac00 session 0x562224ba9c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562227468000 session 0x562226cb5c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176668672 unmapped: 75726848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.748656+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225772800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562225772800 session 0x562224b44f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176693248 unmapped: 75702272 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 heartbeat osd_stat(store_statfs(0x4f702c000/0x0/0x4ffc00000, data 0x131c10c/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.748819+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2931335 data_alloc: 218103808 data_used: 6295552
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.422940254s of 10.913562775s, submitted: 101
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176693248 unmapped: 75702272 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.748992+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f1400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 175874048 unmapped: 76521472 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x5622262f1400 session 0x5622268bf2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562226d83800 session 0x562226dd32c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.749153+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 ms_handle_reset con 0x562226fb7800 session 0x5622268bfa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176005120 unmapped: 76390400 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 420 ms_handle_reset con 0x562224cdac00 session 0x562226dd21e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562225772800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f673b000/0x0/0x4ffc00000, data 0x1c0f16e/0x1e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [1,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 420 ms_handle_reset con 0x5622262f0800 session 0x5622274ca780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.749435+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176037888 unmapped: 76357632 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.749576+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 ms_handle_reset con 0x562225772800 session 0x562224abd680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 ms_handle_reset con 0x562227468000 session 0x562224b44f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176037888 unmapped: 76357632 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 ms_handle_reset con 0x562224cdac00 session 0x5622274cb0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 ms_handle_reset con 0x562226fca000 session 0x56222534d680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.749722+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x5622262f0800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016962 data_alloc: 218103808 data_used: 6311936
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176046080 unmapped: 76349440 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 ms_handle_reset con 0x562226d83800 session 0x562226c001e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.749856+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x5622262f0800 session 0x562226cdd680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 heartbeat osd_stat(store_statfs(0x4f6730000/0x0/0x4ffc00000, data 0x1c14be8/0x1e4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,1,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x562226fb7800 session 0x562226cdf680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 76324864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.749974+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x562224cdac00 session 0x562227498b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x562226d83800 session 0x562226dd3e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 76128256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.750213+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x562227468000 session 0x5622271392c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 76128256 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.750331+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176291840 unmapped: 76103680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.750552+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086611 data_alloc: 234881024 data_used: 14237696
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 76070912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.750707+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 heartbeat osd_stat(store_statfs(0x4f6707000/0x0/0x4ffc00000, data 0x1c3eb86/0x1e76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.009035110s of 10.920314789s, submitted: 140
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 ms_handle_reset con 0x56222b8b4c00 session 0x5622253dcb40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 76070912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.750845+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 423 ms_handle_reset con 0x562226d83800 session 0x56222762f2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176332800 unmapped: 76062720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 423 heartbeat osd_stat(store_statfs(0x4f6706000/0x0/0x4ffc00000, data 0x1c3ebe8/0x1e77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.750969+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 424 ms_handle_reset con 0x562226fb7800 session 0x562226c6a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 424 ms_handle_reset con 0x562224cdac00 session 0x562226dd2f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176349184 unmapped: 76046336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 424 ms_handle_reset con 0x562227468000 session 0x56222539ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.751137+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176357376 unmapped: 76038144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.751271+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 424 heartbeat osd_stat(store_statfs(0x4f66fc000/0x0/0x4ffc00000, data 0x1c4245a/0x1e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227476c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3098632 data_alloc: 234881024 data_used: 14245888
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 ms_handle_reset con 0x562226c38800 session 0x562225338960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 ms_handle_reset con 0x562227476c00 session 0x562226c01e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 heartbeat osd_stat(store_statfs(0x4f66fc000/0x0/0x4ffc00000, data 0x1c4245a/0x1e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 76021760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 ms_handle_reset con 0x562224cdac00 session 0x56222762f4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.751475+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 heartbeat osd_stat(store_statfs(0x4f66fb000/0x0/0x4ffc00000, data 0x1c440a2/0x1e82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177438720 unmapped: 74956800 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e21400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 426 ms_handle_reset con 0x562226e21400 session 0x5622268f52c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 426 ms_handle_reset con 0x562226d83800 session 0x56222534cd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 426 ms_handle_reset con 0x562226c38800 session 0x5622273e1e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.751647+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177512448 unmapped: 74883072 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 427 ms_handle_reset con 0x562224cdac00 session 0x56222744b4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 427 ms_handle_reset con 0x562226c38800 session 0x56222762fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.751843+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 428 ms_handle_reset con 0x562226d83800 session 0x5622274cbc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e21400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177569792 unmapped: 74825728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.752031+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227476c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 428 ms_handle_reset con 0x562226fb7800 session 0x5622273e1860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 428 ms_handle_reset con 0x562227476c00 session 0x56222763f860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 74702848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 429 ms_handle_reset con 0x562226e21400 session 0x562226cafe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227476c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 429 ms_handle_reset con 0x562227476c00 session 0x562226ce03c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.752145+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f66f2000/0x0/0x4ffc00000, data 0x1c4a714/0x1e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158284 data_alloc: 234881024 data_used: 14303232
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183222272 unmapped: 69173248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 430 ms_handle_reset con 0x562224cdac00 session 0x562226dd3a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226c38800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.752286+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.052372932s of 10.146560669s, submitted: 255
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 69730304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 431 ms_handle_reset con 0x562226d83800 session 0x5622273e0d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.752429+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 431 ms_handle_reset con 0x562226c38800 session 0x562226ce0b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182706176 unmapped: 69689344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.752560+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182706176 unmapped: 69689344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.752732+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 431 ms_handle_reset con 0x562226fb7800 session 0x56222461ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183025664 unmapped: 69369856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562224cdac00 session 0x562226cdfc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 heartbeat osd_stat(store_statfs(0x4f5e4b000/0x0/0x4ffc00000, data 0x24eef84/0x2732000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.752926+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562226d83800 session 0x562226c6b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e21400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207523 data_alloc: 234881024 data_used: 14843904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562226e21400 session 0x56222744ab40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 69337088 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.753066+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 69337088 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227476c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562227476c00 session 0x562225338d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.753190+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562224cdac00 session 0x56222742a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 69337088 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.753334+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562226d83800 session 0x562224641e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226e21400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 ms_handle_reset con 0x562226fb7800 session 0x5622271083c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 69337088 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.753521+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 69337088 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 handle_osd_map epochs [434,434], i have 432, src has [1,434]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 432 handle_osd_map epochs [433,434], i have 432, src has [1,434]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.753650+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211027 data_alloc: 234881024 data_used: 14860288
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 ms_handle_reset con 0x562227468000 session 0x562226c6a000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183115776 unmapped: 69279744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f5e37000/0x0/0x4ffc00000, data 0x2503a9e/0x2747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.753760+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 ms_handle_reset con 0x562226e21400 session 0x5622268be000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 69271552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.747125626s of 10.254196167s, submitted: 180
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.753893+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 ms_handle_reset con 0x562224cdac00 session 0x56222534c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f5e31000/0x0/0x4ffc00000, data 0x25071a8/0x274c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183132160 unmapped: 69263360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.754082+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 ms_handle_reset con 0x562226fb7800 session 0x5622274990e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183148544 unmapped: 69246976 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.754253+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183238656 unmapped: 69156864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f5e32000/0x0/0x4ffc00000, data 0x25071a8/0x274c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.754399+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207257 data_alloc: 234881024 data_used: 14856192
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183238656 unmapped: 69156864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.754533+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 ms_handle_reset con 0x562227468000 session 0x56222742ad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183255040 unmapped: 69140480 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 ms_handle_reset con 0x562228d7f400 session 0x5622246405a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.754670+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183353344 unmapped: 69042176 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.754842+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 ms_handle_reset con 0x562226d83800 session 0x56222736ad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183361536 unmapped: 69033984 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.755000+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f5e2f000/0x0/0x4ffc00000, data 0x2508d6e/0x274e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 ms_handle_reset con 0x562228d7ec00 session 0x562224abcf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 ms_handle_reset con 0x562224cdac00 session 0x5622268be780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 69009408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.755166+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211724 data_alloc: 234881024 data_used: 14864384
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 69009408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.755330+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 69009408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.755526+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 ms_handle_reset con 0x56222b8b4000 session 0x56222763f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.127852440s of 10.460619926s, submitted: 70
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183402496 unmapped: 68993024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 ms_handle_reset con 0x562226fca000 session 0x562226cb43c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 ms_handle_reset con 0x562228d7f800 session 0x562227498d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.755709+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f5e2b000/0x0/0x4ffc00000, data 0x250a9f8/0x2752000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 ms_handle_reset con 0x562224cdac00 session 0x5622274ca960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183402496 unmapped: 68993024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 ms_handle_reset con 0x562226d83800 session 0x5622273e0960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.755869+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 ms_handle_reset con 0x562226fca000 session 0x5622253dd4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 ms_handle_reset con 0x562228d7f800 session 0x56222736a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7ec00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 ms_handle_reset con 0x562228d7ec00 session 0x5622262e8960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 71286784 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 ms_handle_reset con 0x562224cdac00 session 0x56222763e1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.756054+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007180 data_alloc: 218103808 data_used: 6381568
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 71286784 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 heartbeat osd_stat(store_statfs(0x4f6ffd000/0x0/0x4ffc00000, data 0x133b5ae/0x1581000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.756180+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 71286784 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.756329+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 71286784 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.756502+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 438 ms_handle_reset con 0x562226d83800 session 0x562224b90960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 71270400 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.756710+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 438 handle_osd_map epochs [440,440], i have 438, src has [1,440]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 438 handle_osd_map epochs [439,440], i have 438, src has [1,440]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181125120 unmapped: 71270400 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.756891+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022050 data_alloc: 218103808 data_used: 6389760
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181141504 unmapped: 71254016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.757052+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 441 ms_handle_reset con 0x562228d7f800 session 0x56222463cf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 441 heartbeat osd_stat(store_statfs(0x4f6ff1000/0x0/0x4ffc00000, data 0x1340894/0x158b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 441 ms_handle_reset con 0x562226fca000 session 0x56222742bc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181166080 unmapped: 71229440 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x56222b8b4000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.757185+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 441 ms_handle_reset con 0x56222b8b4000 session 0x562226caed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 441 handle_osd_map epochs [441,442], i have 441, src has [1,442]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.373895645s of 10.019693375s, submitted: 93
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 442 ms_handle_reset con 0x562226d83800 session 0x562226dd2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181174272 unmapped: 71221248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 442 ms_handle_reset con 0x562224cdac00 session 0x5622271092c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.757318+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 442 ms_handle_reset con 0x562226fca000 session 0x562224ae25a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181174272 unmapped: 71221248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.757441+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 442 ms_handle_reset con 0x562228d7f800 session 0x56222744af00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fb7800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562226fb7800 session 0x562226dd25a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562224cdac00 session 0x562226dd2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 71204864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.757568+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562226d83800 session 0x562224b90960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031093 data_alloc: 218103808 data_used: 6410240
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562226fca000 session 0x5622273e0960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562228d7f800 session 0x5622268be780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 71204864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.757713+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 ms_handle_reset con 0x562228d7f400 session 0x562224b91680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f6fe9000/0x0/0x4ffc00000, data 0x1345be4/0x1594000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182255616 unmapped: 70139904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.757841+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 ms_handle_reset con 0x562226d83800 session 0x5622246405a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 ms_handle_reset con 0x562224cdac00 session 0x56222762ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 ms_handle_reset con 0x562227468000 session 0x562224abcf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 71155712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 ms_handle_reset con 0x562226fca000 session 0x562225338d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.758049+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 71114752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7fc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 ms_handle_reset con 0x562228d7fc00 session 0x56222736b4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7fc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.758230+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181313536 unmapped: 71081984 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 ms_handle_reset con 0x562224cdac00 session 0x5622273e01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226d83800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.758496+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 ms_handle_reset con 0x562228d7f800 session 0x56222461ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038430 data_alloc: 218103808 data_used: 6426624
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 ms_handle_reset con 0x562226fca000 session 0x56222762fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f6fe2000/0x0/0x4ffc00000, data 0x134947c/0x159a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181403648 unmapped: 70991872 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.758654+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 ms_handle_reset con 0x562227468000 session 0x56222461fc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181460992 unmapped: 70934528 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.758816+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 ms_handle_reset con 0x562226d83800 session 0x562225760000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.000669003s of 10.003086090s, submitted: 170
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 ms_handle_reset con 0x562224cdac00 session 0x562226cb4960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 ms_handle_reset con 0x562226fca000 session 0x56222463c5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181477376 unmapped: 70918144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f6fe1000/0x0/0x4ffc00000, data 0x134b062/0x159c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 ms_handle_reset con 0x562227468000 session 0x562226c6af00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.758964+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 ms_handle_reset con 0x562228d7fc00 session 0x562226cb4b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f6fe2000/0x0/0x4ffc00000, data 0x134b060/0x159c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181485568 unmapped: 70909952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.759171+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 447 ms_handle_reset con 0x562228d7f800 session 0x562226ce01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181485568 unmapped: 70909952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.759989+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3046772 data_alloc: 218103808 data_used: 6447104
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 447 ms_handle_reset con 0x562224cdac00 session 0x5622273e0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 447 ms_handle_reset con 0x562228d7f000 session 0x5622274a9e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181485568 unmapped: 70909952 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.760448+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 447 handle_osd_map epochs [447,448], i have 447, src has [1,448]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 448 ms_handle_reset con 0x562226fca000 session 0x562226c730e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 70901760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.760788+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7fc00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 448 ms_handle_reset con 0x562227468000 session 0x56222539fa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 448 ms_handle_reset con 0x562228d7fc00 session 0x562227109860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 70901760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.761331+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 181501952 unmapped: 70893568 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 448 ms_handle_reset con 0x562224cdac00 session 0x5622273e14a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.761459+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 449 ms_handle_reset con 0x562226fca000 session 0x562226ce10e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 449 heartbeat osd_stat(store_statfs(0x4f6fdb000/0x0/0x4ffc00000, data 0x134e944/0x15a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 69820416 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.762465+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059084 data_alloc: 218103808 data_used: 6459392
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 69820416 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.762937+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 ms_handle_reset con 0x562227468000 session 0x562226dd3860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562228d7f000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 ms_handle_reset con 0x562227466800 session 0x562226c6b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 ms_handle_reset con 0x562228d7f000 session 0x5622273e0780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182607872 unmapped: 69787648 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.763071+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 ms_handle_reset con 0x562224cdac00 session 0x5622273e12c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 ms_handle_reset con 0x562226fca000 session 0x562227109680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 heartbeat osd_stat(store_statfs(0x4f6fd1000/0x0/0x4ffc00000, data 0x135210e/0x15ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.733193398s of 10.018505096s, submitted: 105
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 ms_handle_reset con 0x562227466800 session 0x562227109860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182640640 unmapped: 69754880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.763318+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 ms_handle_reset con 0x562227468000 session 0x562226c730e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227467800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 ms_handle_reset con 0x562227466400 session 0x562226cdf860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 ms_handle_reset con 0x562227466000 session 0x562225760000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182648832 unmapped: 69746688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.763648+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 451 handle_osd_map epochs [452,452], i have 452, src has [1,452]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 ms_handle_reset con 0x562224cdac00 session 0x5622257610e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 ms_handle_reset con 0x562226fca000 session 0x5622273e01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 heartbeat osd_stat(store_statfs(0x4f6fce000/0x0/0x4ffc00000, data 0x1353cfe/0x15ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 ms_handle_reset con 0x562227467800 session 0x562226ce01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 69730304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:15.763885+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067784 data_alloc: 218103808 data_used: 6471680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 69730304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:16.764217+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 ms_handle_reset con 0x562227468000 session 0x5622274ca3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 ms_handle_reset con 0x562227466800 session 0x56222744a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 69730304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:17.764458+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182665216 unmapped: 69730304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:18.764936+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 452 handle_osd_map epochs [452,453], i have 452, src has [1,453]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 453 ms_handle_reset con 0x562227468000 session 0x56222742b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f6fcc000/0x0/0x4ffc00000, data 0x135595e/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182673408 unmapped: 69722112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:19.765177+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 453 ms_handle_reset con 0x562224cdac00 session 0x5622262e85a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182681600 unmapped: 69713920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:20.765440+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f6fc7000/0x0/0x4ffc00000, data 0x13575c0/0x15b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077730 data_alloc: 218103808 data_used: 6492160
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562227466000 session 0x5622253ddc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 69705728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227467800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:21.765566+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562226fca000 session 0x562226cdcf00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562227467800 session 0x5622268bfc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 69705728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562224cdac00 session 0x562226896d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:22.765818+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562226fca000 session 0x5622274992c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.010722160s of 10.072757721s, submitted: 125
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 ms_handle_reset con 0x562227466800 session 0x562226cdd4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 heartbeat osd_stat(store_statfs(0x4f6fc5000/0x0/0x4ffc00000, data 0x13591b0/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 ms_handle_reset con 0x562227466000 session 0x562224b912c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 182689792 unmapped: 69705728 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:23.765970+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 ms_handle_reset con 0x562227468000 session 0x562226c001e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 heartbeat osd_stat(store_statfs(0x4f6fc2000/0x0/0x4ffc00000, data 0x135add8/0x15bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 191430656 unmapped: 60964864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:24.766118+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 ms_handle_reset con 0x562227466000 session 0x562226c6a000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 ms_handle_reset con 0x562224cdac00 session 0x5622253dcd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 ms_handle_reset con 0x562226fca000 session 0x562224abd680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227467800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 456 ms_handle_reset con 0x562227467800 session 0x5622273e0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 456 ms_handle_reset con 0x562227466800 session 0x562226ce0b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 69320704 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:25.766330+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327177 data_alloc: 218103808 data_used: 6508544
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 456 ms_handle_reset con 0x562224cdac00 session 0x562227498d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 69320704 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:26.766483+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 69320704 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:27.766687+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 457 ms_handle_reset con 0x562226fca000 session 0x562226cb5e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183099392 unmapped: 69296128 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:28.767038+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 457 ms_handle_reset con 0x562227466000 session 0x5622253dc960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227468000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 458 heartbeat osd_stat(store_statfs(0x4f4965000/0x0/0x4ffc00000, data 0x35a549e/0x3808000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 458 ms_handle_reset con 0x562227468000 session 0x5622268be5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183099392 unmapped: 69296128 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:29.767326+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 459 ms_handle_reset con 0x562224cdac00 session 0x56222744a1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 459 ms_handle_reset con 0x562226fca000 session 0x562226cb5680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 69271552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:30.767529+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 459 heartbeat osd_stat(store_statfs(0x4f495e000/0x0/0x4ffc00000, data 0x35a8b3a/0x380c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339934 data_alloc: 218103808 data_used: 6508544
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 69271552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:31.767781+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 ms_handle_reset con 0x562227466000 session 0x562227498d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 69206016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:32.767978+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 ms_handle_reset con 0x562227466800 session 0x5622273e0000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 69206016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 heartbeat osd_stat(store_statfs(0x4f495d000/0x0/0x4ffc00000, data 0x35aa73a/0x3810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:33.768121+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 ms_handle_reset con 0x562226bbf400 session 0x562227139a40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 ms_handle_reset con 0x562227466c00 session 0x5622253dcd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.160073280s of 11.413453102s, submitted: 187
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 ms_handle_reset con 0x562226bbf400 session 0x562226c6a000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183197696 unmapped: 69197824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:34.768247+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 ms_handle_reset con 0x562224cdac00 session 0x562226c001e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 ms_handle_reset con 0x562226fca000 session 0x5622274992c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 ms_handle_reset con 0x562227466000 session 0x562226896d20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 heartbeat osd_stat(store_statfs(0x4f495a000/0x0/0x4ffc00000, data 0x35ac32a/0x3813000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 ms_handle_reset con 0x562226fca000 session 0x56222742b860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183214080 unmapped: 69181440 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:35.768388+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 ms_handle_reset con 0x562227466c00 session 0x5622274ca3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3353249 data_alloc: 218103808 data_used: 6529024
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 ms_handle_reset con 0x562227466800 session 0x562226cdc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183230464 unmapped: 69165056 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:36.768522+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 ms_handle_reset con 0x562226bbe000 session 0x562226ce01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 ms_handle_reset con 0x562226bbe400 session 0x5622273e01e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f4954000/0x0/0x4ffc00000, data 0x35ae13b/0x3818000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183238656 unmapped: 69156864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:37.768725+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 463 ms_handle_reset con 0x562226bbe000 session 0x562226c730e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183255040 unmapped: 69140480 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:38.768913+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 463 ms_handle_reset con 0x562226bbe400 session 0x562227109680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 463 ms_handle_reset con 0x562226fca000 session 0x5622273e12c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183255040 unmapped: 69140480 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:39.769273+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183255040 unmapped: 69140480 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:40.769416+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 ms_handle_reset con 0x562227466800 session 0x5622273e0780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 ms_handle_reset con 0x562227466c00 session 0x562224b91c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413510 data_alloc: 234881024 data_used: 12820480
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183263232 unmapped: 69132288 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:41.769942+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 ms_handle_reset con 0x562226bbe000 session 0x562226dd2960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f4948000/0x0/0x4ffc00000, data 0x35b3608/0x3824000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 ms_handle_reset con 0x562226fca000 session 0x56222762e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 ms_handle_reset con 0x562227466800 session 0x562224abde00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 69124096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:42.770132+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 ms_handle_reset con 0x562226bbe400 session 0x562226cde000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f4949000/0x0/0x4ffc00000, data 0x35b35f8/0x3823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 69124096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 ms_handle_reset con 0x562226bbe800 session 0x56222744a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:43.770417+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.759520531s of 10.027559280s, submitted: 99
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f4947000/0x0/0x4ffc00000, data 0x35b524c/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 69124096 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:44.770676+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 ms_handle_reset con 0x562226bbe000 session 0x5622273e14a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 69115904 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 467 ms_handle_reset con 0x562226bbe400 session 0x56222763e5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:45.770817+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 467 handle_osd_map epochs [467,468], i have 467, src has [1,468]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419537 data_alloc: 234881024 data_used: 12824576
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f4946000/0x0/0x4ffc00000, data 0x35b6d4b/0x3828000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183287808 unmapped: 69107712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:46.771427+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f4942000/0x0/0x4ffc00000, data 0x35b893b/0x382b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183287808 unmapped: 69107712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:47.772013+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 183287808 unmapped: 69107712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:48.772362+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 468 ms_handle_reset con 0x562226fca000 session 0x562226cde5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 468 ms_handle_reset con 0x562227466800 session 0x562224ae2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 202309632 unmapped: 50085888 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:49.772505+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 469 ms_handle_reset con 0x562226ec4000 session 0x5622268bf2c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 194625536 unmapped: 57769984 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:50.772612+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510774 data_alloc: 234881024 data_used: 13733888
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 ms_handle_reset con 0x562226bbe000 session 0x5622253383c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:51.772746+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 194904064 unmapped: 57491456 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 ms_handle_reset con 0x562226fca000 session 0x562226c73e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 ms_handle_reset con 0x562226bbe400 session 0x5622257610e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f2d1b000/0x0/0x4ffc00000, data 0x403a089/0x42b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:52.772891+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 194985984 unmapped: 57409536 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 ms_handle_reset con 0x562226ec5000 session 0x562226cde3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 ms_handle_reset con 0x562226ec4800 session 0x56222742b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:53.773014+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 57368576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 471 ms_handle_reset con 0x562226bbe000 session 0x5622271383c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 471 ms_handle_reset con 0x562227466800 session 0x5622268f4780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:54.773167+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195051520 unmapped: 57344000 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.261860371s of 10.195130348s, submitted: 230
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 ms_handle_reset con 0x562226bbe400 session 0x5622268be5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 ms_handle_reset con 0x562226ec5000 session 0x562226cdd680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 ms_handle_reset con 0x562226ec4c00 session 0x562224b44f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 ms_handle_reset con 0x562226ec4c00 session 0x56222762eb40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:55.773295+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195108864 unmapped: 57286656 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 ms_handle_reset con 0x562226fca000 session 0x562224b903c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567731 data_alloc: 234881024 data_used: 13733888
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 473 ms_handle_reset con 0x562226bbe400 session 0x5622268f54a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 473 ms_handle_reset con 0x562226ec5000 session 0x5622271390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:56.773404+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195182592 unmapped: 57212928 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 ms_handle_reset con 0x562226bbe000 session 0x562226cdf680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 heartbeat osd_stat(store_statfs(0x4f280b000/0x0/0x4ffc00000, data 0x454644b/0x47c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 ms_handle_reset con 0x562226bbe400 session 0x5622262e8960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:57.773524+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 57163776 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 ms_handle_reset con 0x562226ec4c00 session 0x5622246410e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 ms_handle_reset con 0x562226ec5000 session 0x56222736b680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:58.773765+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 57163776 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:59.773962+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 57163776 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 475 ms_handle_reset con 0x562226fca000 session 0x56222461f680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 475 heartbeat osd_stat(store_statfs(0x4f2809000/0x0/0x4ffc00000, data 0x45480af/0x47c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:00.774160+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 57171968 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227466800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 475 ms_handle_reset con 0x562227466800 session 0x562226c721e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571424 data_alloc: 234881024 data_used: 13754368
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:01.774396+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 57163776 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 ms_handle_reset con 0x562226bbe400 session 0x56222736bc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f2807000/0x0/0x4ffc00000, data 0x4549cbb/0x47c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 ms_handle_reset con 0x562226ec4c00 session 0x562226c6af00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719455497' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:02.774540+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 ms_handle_reset con 0x562226ec5000 session 0x5622262e85a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:03.774671+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:04.774836+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f2807000/0x0/0x4ffc00000, data 0x454b89d/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.856985092s of 10.191707611s, submitted: 127
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 ms_handle_reset con 0x562226fca000 session 0x56222461fe00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:05.775003+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f2807000/0x0/0x4ffc00000, data 0x454b89d/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577342 data_alloc: 234881024 data_used: 13754368
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:06.775199+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f2804000/0x0/0x4ffc00000, data 0x454d3ab/0x47ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:07.775365+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195239936 unmapped: 57155584 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 477 ms_handle_reset con 0x562226ec5400 session 0x562224ae2b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 478 ms_handle_reset con 0x562226ec5800 session 0x56222736a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 478 ms_handle_reset con 0x562226bbe400 session 0x562226cdda40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:08.775569+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195256320 unmapped: 57139200 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 479 ms_handle_reset con 0x562226ec4c00 session 0x562224b91680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 479 ms_handle_reset con 0x562226ec5000 session 0x56222762ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f27ff000/0x0/0x4ffc00000, data 0x454effd/0x47ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226fca000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 479 ms_handle_reset con 0x562226fca000 session 0x562226dd25a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:09.775695+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195272704 unmapped: 57122816 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 479 handle_osd_map epochs [479,480], i have 479, src has [1,480]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 480 ms_handle_reset con 0x562226bbe400 session 0x562226c010e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:10.775914+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 195305472 unmapped: 57090048 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 480 ms_handle_reset con 0x562226ec4c00 session 0x562226cb50e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 480 ms_handle_reset con 0x562226ec5800 session 0x56222742a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f27f7000/0x0/0x4ffc00000, data 0x4552d42/0x47d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 480 handle_osd_map epochs [480,481], i have 480, src has [1,481]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 481 ms_handle_reset con 0x562226ec5000 session 0x562226cde5a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 481 ms_handle_reset con 0x562226ec4400 session 0x562226caed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592540 data_alloc: 234881024 data_used: 13762560
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:11.776163+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196378624 unmapped: 56016896 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 481 ms_handle_reset con 0x562226bbe400 session 0x56222534cd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:12.776402+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196386816 unmapped: 56008704 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 482 ms_handle_reset con 0x562224cdac00 session 0x5622253ddc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 482 ms_handle_reset con 0x562226bbf400 session 0x56222742a1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 482 ms_handle_reset con 0x562226ec4c00 session 0x562226cde1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:13.776579+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 54943744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 482 ms_handle_reset con 0x562226ec5000 session 0x5622253dc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 483 ms_handle_reset con 0x562226ec5800 session 0x562224b91c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:14.777180+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 54927360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.743387222s of 10.257036209s, submitted: 185
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 483 ms_handle_reset con 0x562224cdac00 session 0x56222763ef00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:15.777480+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 54927360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 ms_handle_reset con 0x562226bbe400 session 0x56222736be00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f27f3000/0x0/0x4ffc00000, data 0x4557c5c/0x47db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602300 data_alloc: 234881024 data_used: 13807616
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:16.777895+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 54927360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:17.778246+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 54927360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 ms_handle_reset con 0x562226bbf400 session 0x56222539f860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 ms_handle_reset con 0x562226ec4c00 session 0x562227108000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f27f1000/0x0/0x4ffc00000, data 0x45598a0/0x47dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:18.778454+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 54927360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 485 ms_handle_reset con 0x562224cdac00 session 0x562226dd34a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:19.780996+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197484544 unmapped: 54910976 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 485 ms_handle_reset con 0x562226bbe400 session 0x5622273e0960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f27ec000/0x0/0x4ffc00000, data 0x455b51e/0x47e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 485 handle_osd_map epochs [486,486], i have 486, src has [1,486]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:20.781183+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197484544 unmapped: 54910976 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 486 ms_handle_reset con 0x562226bbf400 session 0x562224ae2780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608736 data_alloc: 234881024 data_used: 13811712
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:21.781320+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 54894592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 487 ms_handle_reset con 0x562226ec4c00 session 0x56222742bc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:22.781555+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 54894592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 487 ms_handle_reset con 0x562226ec5800 session 0x5622274cb680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:23.781726+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 54894592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 487 ms_handle_reset con 0x562224cdac00 session 0x562224640b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:24.781895+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 54894592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:25.782049+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 197500928 unmapped: 54894592 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.770115852s of 10.880167961s, submitted: 94
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 488 ms_handle_reset con 0x562226bbe400 session 0x5622274a92c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 488 heartbeat osd_stat(store_statfs(0x4f27e3000/0x0/0x4ffc00000, data 0x4560854/0x47ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614844 data_alloc: 234881024 data_used: 13815808
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:26.782200+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:27.782369+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:28.782649+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:29.782827+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f27e3000/0x0/0x4ffc00000, data 0x4560854/0x47ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:30.782990+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617978 data_alloc: 234881024 data_used: 13819904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:31.783187+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f27e0000/0x0/0x4ffc00000, data 0x456232a/0x47ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:32.783369+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:33.783569+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196337664 unmapped: 56057856 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 489 ms_handle_reset con 0x562226bbf400 session 0x5622262cad20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f27e0000/0x0/0x4ffc00000, data 0x456232a/0x47ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:34.783688+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196509696 unmapped: 55885824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:35.783808+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196509696 unmapped: 55885824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 29 08:25:36 compute-0 nova_compute[255040]: 2025-11-29 08:25:36.070 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.331426620s of 10.478783607s, submitted: 24
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626129 data_alloc: 234881024 data_used: 13836288
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:36.783996+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196509696 unmapped: 55885824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:37.784128+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:38.784379+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f27b8000/0x0/0x4ffc00000, data 0x4587e07/0x4815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:39.784581+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:40.784768+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627409 data_alloc: 234881024 data_used: 13967360
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:41.784912+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:42.785214+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:43.785393+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f27b8000/0x0/0x4ffc00000, data 0x4587e07/0x4815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:44.785569+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:45.785702+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f27b8000/0x0/0x4ffc00000, data 0x4587e07/0x4815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627409 data_alloc: 234881024 data_used: 13967360
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:46.785916+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227462800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.351965904s of 10.363289833s, submitted: 13
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:47.786244+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562227462800 session 0x5622274994a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:48.786632+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 196534272 unmapped: 55861248 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:49.786714+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203874304 unmapped: 48521216 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:50.786836+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 199811072 unmapped: 52584448 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fb9000/0x0/0x4ffc00000, data 0x4d87e07/0x5015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3731937 data_alloc: 234881024 data_used: 21389312
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:51.786926+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fb9000/0x0/0x4ffc00000, data 0x4d87e07/0x5015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 200048640 unmapped: 52346880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:52.787037+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 200065024 unmapped: 52330496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562227463400 session 0x562226cafc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:53.787182+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 200065024 unmapped: 52330496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:54.787399+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 200065024 unmapped: 52330496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:55.787543+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 200065024 unmapped: 52330496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741985 data_alloc: 234881024 data_used: 25583616
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:56.787681+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fb9000/0x0/0x4ffc00000, data 0x4d87e07/0x5015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:57.787822+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:58.787998+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:17:59.788175+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:00.788324+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741985 data_alloc: 234881024 data_used: 25583616
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:01.788471+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 203669504 unmapped: 48726016 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:02.788706+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 48390144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fb9000/0x0/0x4ffc00000, data 0x4d87e07/0x5015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:03.788824+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 48390144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562226ec4c00 session 0x562226cb4960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562226ec5800 session 0x562226cdc780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:04.788978+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 48390144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:05.789142+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204005376 unmapped: 48390144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746945 data_alloc: 234881024 data_used: 26267648
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:06.789289+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204021760 unmapped: 48373760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.719198227s of 20.428638458s, submitted: 14
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fb9000/0x0/0x4ffc00000, data 0x4d87e07/0x5015000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:07.789424+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204021760 unmapped: 48373760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:08.789803+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204021760 unmapped: 48373760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:09.790010+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562224cdac00 session 0x562226caf680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204046336 unmapped: 48349184 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:10.790153+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 ms_handle_reset con 0x562226bbe400 session 0x562226cde780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204054528 unmapped: 48340992 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f1fdd000/0x0/0x4ffc00000, data 0x4d63de4/0x4ff0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743543 data_alloc: 234881024 data_used: 26152960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:11.790321+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204054528 unmapped: 48340992 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbf400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562226bbf400 session 0x5622271094a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:12.790429+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204103680 unmapped: 48291840 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562224cdac00 session 0x562226c6b680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562226bbe400 session 0x562226cb4780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:13.790561+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562226ec4c00 session 0x562227499c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204103680 unmapped: 48291840 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 heartbeat osd_stat(store_statfs(0x4f27d8000/0x0/0x4ffc00000, data 0x4565a2a/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:14.790723+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204103680 unmapped: 48291840 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227462800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562226ec5800 session 0x5622262e8f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562227462800 session 0x562226ce1860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227462800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562227463c00 session 0x562226cdde00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562227462800 session 0x562225339680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:15.791441+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 heartbeat osd_stat(store_statfs(0x4f27d9000/0x0/0x4ffc00000, data 0x4565a2a/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204103680 unmapped: 48291840 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562224cdac00 session 0x5622274983c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:16.791707+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3678180 data_alloc: 234881024 data_used: 24784896
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 ms_handle_reset con 0x562226ec4c00 session 0x56222468a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204111872 unmapped: 48283648 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _renew_subs
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.368577957s of 10.259649277s, submitted: 33
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 ms_handle_reset con 0x562226ec5800 session 0x562224b45c20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:17.791968+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 ms_handle_reset con 0x562226bbe400 session 0x562226caf4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204169216 unmapped: 48226304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 heartbeat osd_stat(store_statfs(0x4f27d9000/0x0/0x4ffc00000, data 0x4565a2a/0x47f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:18.792358+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204169216 unmapped: 48226304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 ms_handle_reset con 0x562226ec5800 session 0x562226dd2960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:19.792927+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:20.793160+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:21.793453+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679722 data_alloc: 234881024 data_used: 24784896
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:22.793667+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.9 total, 600.0 interval
                                           Cumulative writes: 29K writes, 109K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.75 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 30.10 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5550 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:23.793924+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 heartbeat osd_stat(store_statfs(0x4f27d8000/0x0/0x4ffc00000, data 0x45675e0/0x47f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:24.794130+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:25.794485+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:26.794633+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683896 data_alloc: 234881024 data_used: 24793088
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:27.794900+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:28.795177+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:29.795398+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:30.795640+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:31.795885+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683896 data_alloc: 234881024 data_used: 24793088
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:32.796070+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:33.796321+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:34.796547+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:35.796730+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:36.796892+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683896 data_alloc: 234881024 data_used: 24793088
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:37.797061+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:38.797358+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:39.797536+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:40.797716+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:41.797879+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683896 data_alloc: 234881024 data_used: 24793088
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:42.798084+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:43.798289+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204177408 unmapped: 48218112 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.673908234s of 26.791568756s, submitted: 46
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x56222534d680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x5622257603c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:44.798497+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:45.798725+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:46.798974+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683176 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:47.799132+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:48.799326+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:49.799597+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204185600 unmapped: 48209920 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227462800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:50.799766+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:51.799989+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686724 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227462800 session 0x562226dd32c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:52.800183+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:53.800406+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:54.800592+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:55.800779+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x45690aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:56.800926+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686325 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:57.801166+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x45690aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:58.801394+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:18:59.801548+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:00.801690+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:01.801969+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686325 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:02.802149+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204210176 unmapped: 48185344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d4000/0x0/0x4ffc00000, data 0x45690aa/0x47fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:03.802346+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.678840637s of 19.717754364s, submitted: 11
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x5622253dd4a0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204251136 unmapped: 48144384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x562226cafa40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:04.802517+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 208470016 unmapped: 43925504 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562227499e00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x562224abcd20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:05.802694+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 48078848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:06.802932+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926390 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 48078848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05db000/0x0/0x4ffc00000, data 0x67620aa/0x69f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:07.803075+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05db000/0x0/0x4ffc00000, data 0x67620aa/0x69f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 48078848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:08.803369+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 48078848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:09.803590+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204316672 unmapped: 48078848 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:10.803710+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:11.804175+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926390 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05db000/0x0/0x4ffc00000, data 0x67620aa/0x69f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:12.804363+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:13.804504+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463c00 session 0x56222763e3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:14.804645+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:15.804791+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x562227109680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204333056 unmapped: 48062464 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x56222763e1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.924462318s of 12.429594040s, submitted: 42
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562226c6a1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:16.805498+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3931247 data_alloc: 234881024 data_used: 24801280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05db000/0x0/0x4ffc00000, data 0x67620aa/0x69f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 48021504 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:17.805708+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204374016 unmapped: 48021504 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:18.805996+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 48005120 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:19.806210+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205611008 unmapped: 46784512 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:20.806396+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:21.806511+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954927 data_alloc: 251658240 data_used: 28098560
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05b6000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:22.811338+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:23.811593+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:24.811797+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:25.811987+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05b6000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:26.812147+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954927 data_alloc: 251658240 data_used: 28098560
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f05b6000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:27.812373+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:28.812650+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 46751744 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:29.812853+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 205660160 unmapped: 46735360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.129644394s of 14.155584335s, submitted: 7
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:30.813020+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 213925888 unmapped: 38469632 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:31.813146+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3986491 data_alloc: 251658240 data_used: 28618752
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 218406912 unmapped: 33988608 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:32.813352+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f0126000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,12])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 218013696 unmapped: 34381824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:33.813467+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f0316000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 34209792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:34.813605+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 34177024 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:35.813775+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 218341376 unmapped: 34054144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:36.813975+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f0316000/0x0/0x4ffc00000, data 0x67860cd/0x6a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4002315 data_alloc: 251658240 data_used: 29495296
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214278144 unmapped: 38117376 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:37.814155+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214278144 unmapped: 38117376 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:38.814343+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214286336 unmapped: 38109184 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:39.814469+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:40.814644+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:41.814805+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4143507 data_alloc: 251658240 data_used: 29499392
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:42.814954+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:43.815126+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:44.815280+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:45.815510+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.559784889s of 15.962188721s, submitted: 103
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:46.815631+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4143683 data_alloc: 251658240 data_used: 29499392
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:47.815885+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:48.816141+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:49.816340+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:50.816530+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214384640 unmapped: 38010880 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:51.816665+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4143683 data_alloc: 251658240 data_used: 29499392
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 38002688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:52.816834+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 38002688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:53.816984+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 38002688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:54.817158+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214392832 unmapped: 38002688 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:55.817327+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eedfd000/0x0/0x4ffc00000, data 0x7f3f0cd/0x81d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x5622274992c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463c00 session 0x562226dd3860
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463c00 session 0x56222744b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:56.817482+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4136369 data_alloc: 251658240 data_used: 29409280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee21000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:57.817648+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:58.817872+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:19:59.818034+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:00.818248+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:01.818595+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4136369 data_alloc: 251658240 data_used: 29409280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 37994496 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:02.818774+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee21000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee21000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214409216 unmapped: 37986304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:03.818906+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214409216 unmapped: 37986304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:04.819137+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee21000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.847698212s of 18.903602600s, submitted: 16
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214409216 unmapped: 37986304 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:05.819392+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214450176 unmapped: 37945344 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:06.819580+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4135841 data_alloc: 251658240 data_used: 29409280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:07.819739+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:08.819929+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:09.820062+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:10.820271+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:11.820498+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4135841 data_alloc: 251658240 data_used: 29409280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:12.820641+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x56222762ed20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:13.820882+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x562224b91680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 37928960 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:14.821193+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562226cdda40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x56222736a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214482944 unmapped: 37912576 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:15.821340+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918365479s of 10.207571983s, submitted: 94
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:16.821459+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4137602 data_alloc: 251658240 data_used: 29409280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:17.821577+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:18.821726+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:19.821857+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:20.821971+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:21.822122+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4138082 data_alloc: 251658240 data_used: 29483008
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:22.822284+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:23.822410+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:24.822999+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:25.823258+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:26.823447+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4138082 data_alloc: 251658240 data_used: 29483008
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:27.823636+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:28.828223+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214491136 unmapped: 37904384 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:29.828423+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.246172905s of 14.254531860s, submitted: 3
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214630400 unmapped: 37765120 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:30.828545+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:31.828756+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146974 data_alloc: 251658240 data_used: 30461952
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:32.828968+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:33.829070+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:34.829244+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:35.829415+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:36.829551+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146974 data_alloc: 251658240 data_used: 30461952
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:37.829693+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:38.829942+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 37527552 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:39.830391+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:40.830615+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:41.830814+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146974 data_alloc: 251658240 data_used: 30461952
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:42.830941+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:43.831143+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:44.831379+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.418904305s of 15.427464485s, submitted: 2
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:45.831531+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:46.831668+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146990 data_alloc: 251658240 data_used: 30457856
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:47.831909+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:48.832206+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:49.832416+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:50.832587+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:51.832827+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146990 data_alloc: 251658240 data_used: 30457856
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:52.833240+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:53.833430+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:54.833588+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 37519360 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.055571556s of 10.062740326s, submitted: 1
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x56222736bc20
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x5622271381e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:55.833908+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 37134336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x5622271390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:56.834070+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154026 data_alloc: 251658240 data_used: 32120832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:57.834971+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:58.835353+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:20:59.835499+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:00.836008+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:01.837811+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215269376 unmapped: 37126144 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4eee22000/0x0/0x4ffc00000, data 0x7f1b0aa/0x81ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154158 data_alloc: 251658240 data_used: 32120832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x562226cdf0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:02.837978+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215285760 unmapped: 37109760 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463c00 session 0x5622262e8f00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:03.838645+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:04.838983+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:05.839262+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:06.839482+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706610 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:07.840042+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:08.840576+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:09.841009+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212131840 unmapped: 40263680 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.134270668s of 15.085477829s, submitted: 57
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x562224ae2780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:10.841176+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x562227108000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:11.841472+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706610 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:12.841679+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:13.841875+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:14.842053+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:15.842210+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212140032 unmapped: 40255488 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:16.842346+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706610 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:17.842639+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:18.842889+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:19.843165+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:20.843412+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:21.843671+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706610 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:22.843884+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:23.844007+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:24.844139+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:25.844316+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:26.844446+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706610 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:27.844772+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:28.844952+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:29.845082+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:30.845233+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:31.845349+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212148224 unmapped: 40247296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.490798950s of 21.507991791s, submitted: 5
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562226cde1e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817762 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:32.845483+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 225034240 unmapped: 27361280 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f27d5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,1])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x562227108000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:33.845653+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:34.845802+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef58e000/0x0/0x4ffc00000, data 0x77b009a/0x7a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:35.845999+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:36.846170+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef58e000/0x0/0x4ffc00000, data 0x77b009a/0x7a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4059954 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:37.846287+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:38.846470+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:39.846629+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:40.846749+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:41.846887+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4059954 data_alloc: 234881024 data_used: 24797184
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:42.847129+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef58e000/0x0/0x4ffc00000, data 0x77b009a/0x7a40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:43.847300+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227462400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227462400 session 0x5622271390e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:44.847472+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 39944192 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x5622271381e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:45.847603+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x56222736a960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.246535301s of 13.876001358s, submitted: 19
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562226cdda40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 39936000 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:46.847767+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 39936000 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4063781 data_alloc: 234881024 data_used: 24801280
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef17c000/0x0/0x4ffc00000, data 0x77b00cd/0x7a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:47.847928+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 39936000 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:48.848175+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 212467712 unmapped: 39927808 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:49.848320+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 214728704 unmapped: 37666816 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:50.848439+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:51.848620+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4114661 data_alloc: 251658240 data_used: 31600640
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:52.848746+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef17c000/0x0/0x4ffc00000, data 0x77b00cd/0x7a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:53.848892+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:54.849042+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:55.849196+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef17c000/0x0/0x4ffc00000, data 0x77b00cd/0x7a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 37339136 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:56.849363+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37355520 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4114661 data_alloc: 251658240 data_used: 31600640
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:57.849486+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37355520 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:58.849645+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37355520 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef17c000/0x0/0x4ffc00000, data 0x77b00cd/0x7a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:21:59.849768+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ef17c000/0x0/0x4ffc00000, data 0x77b00cd/0x7a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 37355520 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.321161270s of 14.345120430s, submitted: 6
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:00.849886+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 220430336 unmapped: 31965184 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:01.850036+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 221831168 unmapped: 30564352 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4248541 data_alloc: 251658240 data_used: 32915456
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:02.850208+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:03.850383+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee192000/0x0/0x4ffc00000, data 0x873a0cd/0x89cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:04.850522+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:05.850788+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:06.850938+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee192000/0x0/0x4ffc00000, data 0x873a0cd/0x89cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251453 data_alloc: 251658240 data_used: 32903168
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:07.851164+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:08.851422+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:09.851668+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222429184 unmapped: 29966336 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee192000/0x0/0x4ffc00000, data 0x873a0cd/0x89cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:10.851814+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.825196266s of 10.491114616s, submitted: 131
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x56222744b0e0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463000 session 0x56222742a780
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562227463000
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562227463000 session 0x562227109680
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:11.852183+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240032 data_alloc: 251658240 data_used: 32894976
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:12.852361+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:13.852501+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:14.853013+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:15.853400+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:16.853602+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240032 data_alloc: 251658240 data_used: 32894976
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:17.853871+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:18.854080+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:19.854372+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:20.854671+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:21.855013+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240032 data_alloc: 251658240 data_used: 32894976
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:22.855212+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:23.855468+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:24.855601+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:25.855753+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:26.855920+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222240768 unmapped: 30154752 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240032 data_alloc: 251658240 data_used: 32894976
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:27.856036+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562224cdac00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.222305298s of 17.285421371s, submitted: 25
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562224cdac00 session 0x5622257603c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 30146560 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226bbe400
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:28.856218+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 30146560 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:29.856357+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 30146560 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:30.856478+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222248960 unmapped: 30146560 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:31.856621+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:32.856863+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240725 data_alloc: 251658240 data_used: 33034240
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:33.856993+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:34.857152+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:35.857293+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:36.857488+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:37.857615+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4240725 data_alloc: 251658240 data_used: 33034240
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:38.857824+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:39.857970+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:40.858132+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:41.858292+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:42.858461+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4241525 data_alloc: 251658240 data_used: 33054720
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.759861946s of 14.775349617s, submitted: 5
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:43.858677+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:44.858852+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:45.859013+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:46.859181+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:47.859374+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4252565 data_alloc: 251658240 data_used: 33808384
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:48.859567+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:49.859700+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:50.859854+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:51.859976+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:52.860181+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4251493 data_alloc: 251658240 data_used: 33804288
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222281728 unmapped: 30113792 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:53.860319+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.453174591s of 11.497036934s, submitted: 17
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222150656 unmapped: 30244864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:54.860477+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222150656 unmapped: 30244864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:55.860599+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222150656 unmapped: 30244864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:56.860775+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222150656 unmapped: 30244864 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:57.860904+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4250789 data_alloc: 251658240 data_used: 33804288
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222158848 unmapped: 30236672 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:58.861083+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222158848 unmapped: 30236672 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:22:59.861253+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1eb000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222158848 unmapped: 30236672 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:00.861388+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1eb000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222158848 unmapped: 30236672 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:01.861556+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:02.861724+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4250485 data_alloc: 251658240 data_used: 33787904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:03.861859+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:04.861989+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:05.862185+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:06.862378+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:07.862649+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4250485 data_alloc: 251658240 data_used: 33787904
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222199808 unmapped: 30195712 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a0bd/0x89cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:08.862889+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226bbe400 session 0x562226dd2960
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec4c00
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.758680344s of 14.786371231s, submitted: 14
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec4c00 session 0x562226ce0b40
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:09.863033+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:10.863181+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:11.863304+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:12.863429+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4255837 data_alloc: 251658240 data_used: 34951168
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:13.863597+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 222298112 unmapped: 30097408 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4ee1f3000/0x0/0x4ffc00000, data 0x873a09a/0x89ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: handle_auth_request added challenge on 0x562226ec5800
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:14.863708+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 ms_handle_reset con 0x562226ec5800 session 0x56222535a3c0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:15.863825+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:16.863959+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:17.864076+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:18.864284+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:19.864414+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:20.864524+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:21.864687+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:22.864831+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:23.864956+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:24.865117+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:25.865244+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:26.865362+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:27.865501+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:28.866220+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:29.866333+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:30.866477+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:31.866655+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:32.866791+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:33.866914+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:34.867060+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:35.867191+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:36.867306+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:37.867446+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:38.867637+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:39.867771+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:40.867917+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:41.868058+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:42.868153+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:43.868274+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:44.868399+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:45.868551+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:46.868671+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:47.868802+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:48.868981+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:49.869162+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:50.869249+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:51.869368+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:52.869520+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:53.870785+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:54.870947+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:55.871148+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:56.871271+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:57.871397+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:58.871601+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:23:59.871764+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:00.871898+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:01.872011+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:02.872163+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:03.872275+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:04.872364+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:05.872514+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:06.872641+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:07.872767+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:08.872918+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:09.873036+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:10.873232+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:11.873363+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:12.873526+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:13.873677+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:14.873796+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:15.873897+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:16.874040+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:17.874147+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:18.874281+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:19.874435+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:20.874588+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:21.874755+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:22.875012+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:23.875207+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:24.875348+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:25.875505+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217284608 unmapped: 35110912 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:26.875648+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:27.875796+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:28.875973+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:29.876199+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:30.876424+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:31.876552+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:32.876670+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:33.876843+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:34.876975+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:35.877173+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:36.877298+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:37.877426+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:38.877694+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:39.877891+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:40.878052+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:41.878235+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:42.879267+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:43.881137+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:44.881782+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:45.882365+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:46.882566+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:47.882687+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:48.882874+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:49.883010+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:50.883145+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:51.883280+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:52.883400+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:53.883560+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:54.883904+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:55.884040+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:56.884160+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:57.884315+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:58.884582+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:24:59.885135+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:00.885250+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:01.885379+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217292800 unmapped: 35102720 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:02.885515+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 35127296 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'config show' '{prefix=config show}'
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:25:36 compute-0 ceph-osd[88926]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f23c5000/0x0/0x4ffc00000, data 0x456909a/0x47f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:25:36 compute-0 ceph-osd[88926]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:25:36 compute-0 ceph-osd[88926]: bluestore.MempoolThread(0x56222325fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730773 data_alloc: 234881024 data_used: 24248320
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:03.885670+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 216973312 unmapped: 35422208 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: tick
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_tickets
Nov 29 08:25:36 compute-0 ceph-osd[88926]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:25:04.885844+0000)
Nov 29 08:25:36 compute-0 ceph-osd[88926]: prioritycache tune_memory target: 4294967296 mapped: 216989696 unmapped: 35405824 heap: 252395520 old mem: 2845415832 new mem: 2845415832
Nov 29 08:25:36 compute-0 ceph-osd[88926]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:25:36 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19361 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:36 compute-0 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:25:36 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:36 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 08:25:36 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132927265' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 08:25:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/719455497' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 08:25:36 compute-0 ceph-mon[75237]: from='client.19361 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:36 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2132927265' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 08:25:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 08:25:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/804161006' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 08:25:37 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 08:25:37 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124901464' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 08:25:37 compute-0 ceph-mon[75237]: pgmap v2222: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/804161006' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 08:25:37 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1124901464' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 08:25:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 08:25:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878992943' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19371 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:38 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:25:38 compute-0 systemd[1]: Started Hostname Service.
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2025-11-29_08:25:38
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [balancer INFO root] do_upmap
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root']
Nov 29 08:25:38 compute-0 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:25:38 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 08:25:38 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458205653' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 08:25:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/2878992943' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 08:25:39 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3458205653' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 08:25:39 compute-0 nova_compute[255040]: 2025-11-29 08:25:39.385 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:39 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 08:25:39 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550588308' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 08:25:39 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19377 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mon[75237]: pgmap v2223: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:40 compute-0 ceph-mon[75237]: from='client.19371 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3550588308' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 08:25:40 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678429589' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19381 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:40 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19383 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:40 compute-0 ceph-mon[75237]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:25:41 compute-0 ceph-mon[75237]: from='client.19377 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:41 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1678429589' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 08:25:41 compute-0 ceph-mon[75237]: from='client.19381 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:41 compute-0 nova_compute[255040]: 2025-11-29 08:25:41.071 255071 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:25:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 08:25:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759906903' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 08:25:41 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 08:25:41 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822304884' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 08:25:41 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19389 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:42 compute-0 ceph-mon[75237]: pgmap v2224: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 29 08:25:42 compute-0 ceph-mon[75237]: from='client.19383 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/3759906903' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 08:25:42 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/822304884' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19391 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:25:42 compute-0 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 271 MiB data, 659 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:25:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 08:25:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607738929' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 08:25:42 compute-0 nova_compute[255040]: 2025-11-29 08:25:42.975 255071 DEBUG oslo_service.periodic_task [None req-cdc1c85b-8887-4699-adf3-c608910dd125 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:25:42 compute-0 ceph-mon[75237]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 08:25:42 compute-0 ceph-mon[75237]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078537855' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mon[75237]: from='client.19389 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mon[75237]: from='client.19391 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/607738929' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mon[75237]: from='client.? 192.168.122.100:0/1078537855' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19397 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:25:43 compute-0 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19399 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
